Improving Performance of Opportunistic Routing Protocol using Fuzzy Logic for Vehicular Ad-hoc Networks in Highways
A. Kashani, M. Ghanbari, A. Rahmani
Journal of Artificial Intelligence and Data Mining | Pub Date: 2020-02-08 | DOI: 10.22044/JADM.2020.8605.1997
Vehicular ad hoc networks (VANETs) are an emerging technology with extensive capabilities in applications such as vehicle safety, traffic management, and intelligent transportation systems. Given the high mobility of vehicles and their inhomogeneous distribution, designing an efficient routing protocol is necessary. Because a road is crowded in some sections and sparse in others, the routing protocol must be able to make decisions dynamically. Moreover, the VANET environment is vulnerable during data transmission. Broadcast routing, like opportunistic routing, can offer better efficiency than other protocols. In this paper, a fuzzy logic opportunistic routing (FLOR) protocol is presented in which the packet-rebroadcasting decision is made by a fuzzy logic system with three input parameters: packet advancement, local density, and the number of duplicate packets received. The rebroadcasting procedure feeds these parameter values to the fuzzy logic system to resolve the redundant-broadcast problem in both crowded and sparse zones. The NS-2 simulator is used to evaluate the proposed FLOR protocol in terms of packet delivery ratio, end-to-end delay, and network throughput against existing protocols such as FLOODING, p-persistence, and FUZZBR; the comparison also covers effective utilization of resources. Simulations in a highway environment show that the proposed protocol achieves better QoS than the above methods published in the literature.
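The rebroadcast decision described above can be illustrated with a small sketch. The membership functions, weights, and threshold below are hypothetical stand-ins (the paper's actual rule base is not reproduced here); only the structure, three fuzzified inputs combined into a zero-order Sugeno-style score, follows the abstract.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def rebroadcast_score(advancement, density, duplicates):
    """Illustrative fuzzy score in [0, 1]; all inputs normalised to [0, 1].

    High packet advancement raises the score; high local density and many
    duplicate receptions lower it. Breakpoints and weights are assumptions.
    """
    adv_high = tri(advancement, 0.3, 1.0, 1.7)   # "advancement is high"
    den_low  = tri(density,    -0.7, 0.0, 0.7)   # "density is low"
    dup_low  = tri(duplicates, -0.7, 0.0, 0.7)   # "few duplicates seen"
    # Weighted average of rule consequents (weights are assumptions).
    return 0.5 * adv_high + 0.3 * den_low + 0.2 * dup_low

def should_rebroadcast(advancement, density, duplicates, threshold=0.5):
    return rebroadcast_score(advancement, density, duplicates) >= threshold
```

A node far ahead of the sender in an empty zone rebroadcasts; a node that has already seen the packet several times in a dense zone stays silent.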
Controller Placement in Software Defined Network using Iterated Local Search
A. Moradi, A. A. Seyedkolaei, S. Hosseini
Pub Date: 2020-01-01 | DOI: 10.22044/JADM.2019.7934.1931
Software-defined networking is a new computer network architecture that separates the controller and the data layer in network devices such as switches and routers. With the emergence of software-defined networks, a class of location problems, called the controller placement problem, has attracted much research attention. The task is to simultaneously find the optimal number and locations of controllers satisfying a set of routing and capacity constraints. In this paper, we suggest an effective solution method based on the Iterated Local Search (ILS) strategy. We then compare our method with an existing standard mathematical programming solver on an extensive set of problem instances. Our method turns out to be computationally much more effective and efficient on medium to large instances of the problem.
Using an Evaluator Fixed Structure Learning Automata in Sampling of Social Networks
S. Roohollahi, A. K. Bardsiri, F. Keynia
Pub Date: 2020-01-01 | DOI: 10.22044/JADM.2019.7145.1842
Social networks are streaming, diverse, and include a wide range of edges; they continuously evolve over time and are formed by activities among users (such as tweets, emails, etc.), where each activity adds an edge to the network graph. Despite their popularity, the dynamicity and large size of most social networks make it difficult or impossible to study the entire network. This paper proposes a sampling algorithm equipped with an evaluator unit for analyzing edges and a set of simple fixed-structure learning automata. The evaluator unit evaluates each edge and decides whether the edge and its corresponding node should be added to the sample set. In the proposed algorithm, each node of the main activity graph is equipped with a simple learning automaton. The proposed algorithm is compared with the best currently reported sampling algorithms using the Kolmogorov-Smirnov (KS) test and normalized L1 and L2 distances, on real networks and on synthetic networks presented as a sequence of edges. Experimental results show the superiority of the proposed algorithm.
Segmentation Assisted Object Distinction for Direct Volume Rendering
A. A. Irani, R. Pourgholi
Pub Date: 2020-01-01 | DOI: 10.22044/JADM.2019.7207.1854
Ray casting is a direct volume rendering technique for visualizing 3D arrays of sampled data, with vital applications in medical and biological imaging. Nevertheless, it is inherently prone to cluttered classification results: it suffers from overlapping transfer-function values and lacks a sufficiently powerful voxel-parsing mechanism for object distinction. In this work, we propose an image-processing-based approach to enhancing the ray casting technique for object distinction. The rendering stage is modified to accommodate masking information generated by a K-means-based hybrid segmentation algorithm. An effective set of image processing techniques is employed to construct a generic segmentation system capable of generating object-membership information.
QoS-based Web Service Recommendation using Popular-dependent Collaborative Filtering
S. Adeli, P. Moradi
Pub Date: 2020-01-01 | DOI: 10.22044/JADM.2019.7461.1888
Since most organizations present their services electronically, the number of functionally equivalent web services is increasing, as is the number of users that employ them. Consequently, so much information is generated by users and web services that users struggle to find the web services appropriate for them. A recommendation method is therefore needed to predict the quality of web services (QoS) and recommend them. Most existing collaborative filtering approaches do not recommend web services effectively because they ignore influential factors such as the dependency among users/web services, the popularity of users/web services, and the location of web services/users. In this paper, a web service recommendation method called Popular-Dependent Collaborative Filtering (PDCF) is proposed. It handles the QoS differences experienced by users, as well as the dependency of users on a specific web service, through a user/web-service dependency factor. Additionally, a user/web-service popularity factor is incorporated into PDCF, which significantly enhances its effectiveness. We also propose a location-aware variant, LPDCF, which incorporates the locations of web services into the recommendation process of PDCF. A set of experiments is conducted on two real-world datasets to evaluate the performance of PDCF and to investigate the effect of the matrix factorization model on its efficiency. The results indicate that PDCF outperforms other competing methods in most cases.
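For orientation, a plain neighbourhood-based QoS predictor with a popularity weight looks roughly like the sketch below. The popularity multiplier stands in for the paper's user/web-service popularity factor; the exact PDCF formulas (and the dependency factor) are not reproduced here, so treat every formula in this block as an assumption.

```python
from math import sqrt

def pearson(a, b):
    """Pearson similarity over co-observed services (fewer than 2 -> 0)."""
    common = [s for s in a if s in b]
    if len(common) < 2:
        return 0.0
    ma = sum(a[s] for s in common) / len(common)
    mb = sum(b[s] for s in common) / len(common)
    num = sum((a[s] - ma) * (b[s] - mb) for s in common)
    den = (sqrt(sum((a[s] - ma) ** 2 for s in common))
           * sqrt(sum((b[s] - mb) ** 2 for s in common)))
    return num / den if den else 0.0

def predict_qos(user, service, qos, popularity):
    """Predict a QoS value (e.g. response time) for user on service.

    qos maps user -> {service: observed QoS}; popularity maps user -> weight.
    Prediction is a similarity-and-popularity-weighted average over users
    who have observed the service (an illustrative formula, not PDCF's).
    """
    num = den = 0.0
    for other, ratings in qos.items():
        if other == user or service not in ratings:
            continue
        w = pearson(qos[user], ratings) * popularity.get(other, 1.0)
        num += w * ratings[service]
        den += abs(w)
    return num / den if den else None
```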
Nonparametric Spectral-Spatial Anomaly Detection
M. Imani
Pub Date: 2020-01-01 | DOI: 10.22044/JADM.2019.6629.1781
Because of the abundant spectral information contained in hyperspectral images, they are suitable data for anomalous target detection. Using spatial features in addition to spectral ones can improve anomaly detection performance. An anomaly detector, called the nonparametric spectral-spatial detector (NSSD), is proposed in this work; it exploits spatial features and local structures extracted by morphological filters. The resulting spectral-spatial hypercube has high dimensionality, so accurate estimates of the background statistics may not be obtainable in small local windows, and applying conventional detectors such as local Reed-Xiaoli (RX) to such high-dimensional data is not feasible. To deal with this difficulty, a nonparametric distance that requires no estimate of the data statistics is used instead of the Mahalanobis distance. According to the experimental results, the average detection accuracy improvement of the proposed NSSD method over global RX, local RX, weighted RX, linear-filtering-based RX (LF-RX), background joint sparse representation detection (BJSRD), kernel RX, subspace RX (SSRX), and RX with uniform target detector (RX-UTD) is 47.68%, 27.86%, 13.23%, 29.26%, 3.33%, 17.07%, 15.88%, and 44.25%, respectively.
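For reference, the RX detectors listed above all score a pixel $\mathbf{x}$ by its Mahalanobis distance from the background, estimated globally or in a local window:

```latex
\delta_{\mathrm{RX}}(\mathbf{x})
  = (\mathbf{x} - \boldsymbol{\mu}_b)^{\mathsf{T}}
    \boldsymbol{\Sigma}_b^{-1}
    (\mathbf{x} - \boldsymbol{\mu}_b)
```

where $\boldsymbol{\mu}_b$ and $\boldsymbol{\Sigma}_b$ are the background mean and covariance. This is exactly the quantity NSSD avoids: in the high-dimensional spectral-spatial hypercube, $\boldsymbol{\Sigma}_b$ cannot be reliably estimated (or inverted) from a small local window, which motivates the nonparametric distance.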
A Sharp Sufficient Condition for Sparsity Pattern Recovery
Z. Shaeiri, M. Karami, A. Aghagolzadeh
Pub Date: 2020-01-01 | DOI: 10.22044/JADM.2019.6908.1810
The number of linear, noisy measurements sufficient for exact and approximate sparsity pattern/support set recovery in the high-dimensional setting is derived. Although this problem has been addressed in the recent literature, considerable gaps remain between those results and the exact limits of perfect support set recovery. To reduce this gap, the sufficient condition is sharpened in this paper. A specific form of joint typicality decoder is used for the support recovery task. Two performance metrics are considered for validation: one requires exact support recovery, while the other seeks partial support recovery. First, an upper bound is obtained on the error probability of sparsity pattern recovery. Next, using this upper bound, the number of measurements sufficient for reliable support recovery is derived. It is shown that the sufficient condition for reliable support recovery depends on three key parameters of the problem: the noise variance, the minimum nonzero entry of the unknown sparse vector, and the sparsity level. Simulations are performed for different sparsity rates, noise variances, and distortion levels. The results show that, in all these cases, the proposed methodology significantly increases the convergence rate of the upper bound on the support recovery error probability, which leads to a lower error probability bound than previously proposed bounds.
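The setting behind this abstract is the standard sparse linear measurement model (an assumption here, since the abstract does not write it out):

```latex
\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{w},
\qquad
\mathbf{A} \in \mathbb{R}^{n \times p},\quad
\|\mathbf{x}\|_0 = k,\quad
\mathbf{w} \sim \mathcal{N}(0, \sigma^2 \mathbf{I}_n),
```

where the goal is to recover the support of $\mathbf{x}$ from $\mathbf{y}$. Sufficient conditions in this literature typically bound the required number of measurements $n$ as a function of exactly the three parameters the abstract names: the noise variance $\sigma^2$, the minimum nonzero magnitude $x_{\min} = \min_{i \in \mathrm{supp}(\mathbf{x})} |x_i|$, and the sparsity level $k$, with scaling of the general shape $n \gtrsim \max\{\, k \log (p/k),\ \log(p-k) \,/\, \log(1 + x_{\min}^2/\sigma^2) \,\}$.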
Improvement of Rule Generation Methods for Fuzzy Controller
N. Mohammadkarimi, V. Derhami
Pub Date: 2020-01-01 | DOI: 10.22044/JADM.2018.5593.1670
This paper proposes fuzzy modeling from observed data. A fuzzy system is a knowledge-based, rule-based system, and its most important part is the rule base. One problem in generating fuzzy rules from training data is inconsistent data: inconsistent and uncertain states in the training data cause high modeling error. Here, a probabilistic fuzzy system is presented to address this challenge. A zero-order Sugeno fuzzy model is used as the fuzzy system structure. First, clustering determines the number of rules and the input membership functions. A set of candidate values for the consequent parts of the fuzzy rules is considered. For each training pair, according to which rules fire and what the pair's output is, the probabilities of the candidate consequents are updated. In the next step, the eligibility probability of each candidate consequent is determined for every rule. Finally, using these probabilities, two probable outputs are generated for each input. Experimental results show the superiority of the proposed approach over some well-known available approaches, while reducing the number of rules and the system complexity.
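The zero-order Sugeno structure used above is standard: each rule has a fuzzy antecedent and a constant consequent, and the crisp output is the firing-strength-weighted average of those constants. The sketch below shows only this deterministic evaluation; the paper's probabilistic selection of candidate consequents sits on top of it and is not reproduced.

```python
def sugeno_zero_order(rules, x):
    """Evaluate a zero-order Sugeno fuzzy model at input x.

    rules: list of (membership_fn, constant_consequent) pairs.
    Output = sum_i w_i * c_i / sum_i w_i, where w_i = membership_fn_i(x).
    """
    weights = [mu(x) for mu, _ in rules]
    total = sum(weights)
    if total == 0:
        return 0.0  # no rule fires; fallback value is an assumption
    return sum(w * c for w, (_, c) in zip(weights, rules)) / total

# Two triangular rules on [0, 1]: "x is low -> 0" and "x is high -> 1".
rules = [(lambda x: max(0.0, 1 - abs(x - 0.0)), 0.0),
         (lambda x: max(0.0, 1 - abs(x - 1.0)), 1.0)]
```

In the paper's scheme, each rule would carry several candidate constants with learned probabilities instead of a single fixed one.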
Camera Arrangement in Visual 3D Systems using Iso-disparity Model to Enhance Depth Estimation Accuracy
M. Karami, A. M. Nia, M. Ehsanian
Pub Date: 2020-01-01 | DOI: 10.22044/JADM.2019.7209.1855
In this paper we address the problem of automatically arranging the cameras of a 3D system to enhance the depth acquisition procedure. Lacking ground truth or a priori information, a measure of uncertainty is required to assess reconstruction quality. The mathematical model of iso-disparity surfaces provides an efficient way to estimate depth estimation uncertainty, which depends on the baseline length, focal length, panning angle, and pixel resolution of a stereo vision system. Accordingly, we first present analytical relations for fast estimation of the uncertainty embedded in depth acquisition; these relations, along with the 3D sampling arrangement, are then used to define a cost function. The optimal camera arrangement is determined by minimizing this cost function with respect to the system parameters under the required constraints. Finally, the proposed algorithm is applied to several 3D models. The simulation results demonstrate a significant improvement (up to 35%) in the depth uncertainty of the obtained depth maps compared with the traditional rectified camera setup.
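The textbook relation behind iso-disparity analysis for a rectified pair makes the dependence on baseline and focal length concrete: from $Z = fb/d$, a one-pixel disparity quantisation maps to a depth error of roughly $Z^2/(fb)$. The helper below computes only this first-order relation, not the paper's full cost function (which also involves the panning angle and the 3D sampling arrangement).

```python
def depth_uncertainty(f_px, baseline_m, depth_m, disparity_step_px=1.0):
    """First-order depth uncertainty of a rectified stereo pair.

    Z = f*b/d implies |dZ/dd| = Z^2/(f*b), so a disparity step of
    disparity_step_px pixels maps to a depth error of about
    Z^2 * step / (f * b). Grows quadratically with depth and shrinks
    with longer baselines and focal lengths.
    """
    return depth_m ** 2 * disparity_step_px / (f_px * baseline_m)
```

This quadratic growth with depth is why widening the baseline (one of the degrees of freedom the paper optimises) improves far-field accuracy.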
An Evolutionary Multi-objective Discretization based on Normalized Cut
M. Hajizadeh-Tahan, M. Ghasemzadeh
Pub Date: 2020-01-01 | DOI: 10.22044/JADM.2019.8507.1989
Learning models and their results depend on the quality of the input data: if raw data are not properly cleaned and structured, the results tend to be incorrect. Therefore, discretization, as one of the preprocessing techniques, plays an important role in learning processes. The most important challenge in discretization is to reduce the number of feature values in a way that maintains the relationships between features and increases the accuracy of classification algorithms. In this paper, a new evolutionary multi-objective algorithm is presented that uses three objective functions to achieve high-quality discretization. The first and second objectives minimize the number of selected cut points and the classification error, respectively. The third objective introduces a new criterion, called the normalized cut, which uses the relationships among feature values to preserve the nature of the data. The performance of the proposed algorithm was tested on 20 benchmark datasets. According to the comparisons and the results of nonparametric statistical tests, the proposed algorithm performs better than other existing major methods.
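The first two objectives are easy to make concrete: a candidate solution is a set of cut points, and its fitness includes the number of cuts and the classification error of the discretized feature. The sketch below evaluates those two objectives with a simple majority-vote error; the third, normalized-cut objective is the paper's own contribution and is deliberately omitted.

```python
from bisect import bisect_right

def discretize(values, cut_points):
    """Map each continuous value to the index of its interval,
    given sorted cut points: (-inf, c1], (c1, c2], ..., (ck, inf)."""
    return [bisect_right(cut_points, v) for v in values]

def objectives(cut_points, values, labels):
    """Two of the three objectives: (number of cuts, classification error).

    Error is the fraction of samples disagreeing with the majority label
    of their interval, a simple stand-in for the paper's classifier.
    """
    bins = discretize(values, cut_points)
    errors = 0
    for b in set(bins):
        members = [lab for bb, lab in zip(bins, labels) if bb == b]
        majority = max(set(members), key=members.count)
        errors += sum(1 for lab in members if lab != majority)
    return len(cut_points), errors / len(values)
```

An evolutionary search would then evolve cut-point sets under these objectives (plus the normalized cut), keeping the Pareto-optimal trade-offs between fewer cuts and lower error.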