This paper proposes a workflow that enables inexperienced designers to create low-poly 3D models using free software. It addresses the geometric complexity generated by photogrammetry, aiming to let independent developers create realistic assets at low cost and eliminating the need for experienced 3D artists or expensive commercial solutions.
"Photogrammetry Workflow for Obtaining Low-polygon 3D Models Using Free Software". Ricardo Pardo Romero, Inmaculada Remolar Quintana. Computer Science Research Notes, 2023-07-01. DOI: 10.24132/csrn.3301.42 (https://doi.org/10.24132/csrn.3301.42)
We propose a novel cut-and-paste approach to synthesizing a training dataset for shelf item detection that reflects the alignment of items in a real image dataset. The conventional cut-and-paste approach synthesizes large numbers of training images by pasting foregrounds onto background images and is effective for training object detection. However, the previous method pastes foregrounds at random positions on the background, so the alignment of items on shelves is not reflected and unrealistic images are generated. Generating realistic images that reflect the actual positional relationships between items is necessary for efficient learning of item detection. The proposed method determines the pasting positions for the foreground images by referring to the alignment of the items in the real image dataset, so it can generate more realistic images that reflect the alignment of real-world items. Since our method synthesizes more realistic images, the trained models perform better.
"Training Image Synthesis for Shelf Item Detection reflecting Alignments of Items in Real Image Dataset". Tomokazu Kaneko, Ryosuke Sakai, Soma Shiraishi. Computer Science Research Notes, 2023-07-01. DOI: 10.24132/csrn.3301.11 (https://doi.org/10.24132/csrn.3301.11)
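The alignment-aware pasting idea can be sketched as follows. This is a hypothetical simplification in Python; the box format, row grouping by rounded y-coordinate, and function names are assumptions for illustration, not the authors' code:

```python
import random

def alignment_aware_positions(real_boxes, n_items, jitter=2):
    """Sample paste positions for foreground items by reusing the row
    structure of annotated item boxes from a real shelf image.

    real_boxes: list of (x, y, w, h) boxes from the real dataset.
    Returns n_items (x, y) top-left paste positions on real shelf rows.
    """
    # Group boxes into shelf rows by their (coarsely quantized) top edge.
    rows = {}
    for x, y, w, h in real_boxes:
        rows.setdefault(round(y / 10) * 10, []).append((x, w))
    positions = []
    row_keys = sorted(rows)
    for i in range(n_items):
        row_y = row_keys[i % len(row_keys)]
        # Place the new item next to an existing one, with small jitter,
        # instead of at a uniformly random image position.
        x, w = random.choice(rows[row_y])
        positions.append((x + w + random.randint(-jitter, jitter), row_y))
    return positions
```

Compared with uniform random pasting, every generated position lies on an actual shelf row, which is the property the paper argues makes the synthetic images more realistic.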
Energy consumption for computing and using hypersurface curvature in volume dataset analysis and visualization is studied here. Baseline usage, as well as usage under certain optimization steps, including compiler optimizations and alternative memory layout strategies, is considered for both analysis and volume visualization tasks. The focus is on x86, which is popular and offers power measurement capabilities. The work aims to advance understanding of computing's energy footprint and to provide guidance for energy-responsible volume data analysis.
"First Considerations in Computing and Using Hypersurface Curvature for Energy Efficiency". Jacob D. Hauenstein, Timothy S. Newman. Computer Science Research Notes, 2023-07-01. DOI: 10.24132/csrn.3301.22 (https://doi.org/10.24132/csrn.3301.22)
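As background, one common curvature quantity, the mean curvature of a volume's level sets, can be computed with central differences. This numpy sketch illustrates the kind of computation whose energy cost the paper studies; it does not reproduce the paper's measurement setup or optimization variants:

```python
import numpy as np

def mean_curvature(f, h=1.0):
    """Mean-curvature proxy for the level sets of a sampled volume f:
    H = div(grad f / |grad f|), via central differences (np.gradient)."""
    gx, gy, gz = np.gradient(f, h)
    mag = np.sqrt(gx**2 + gy**2 + gz**2) + 1e-12  # avoid division by zero
    nx, ny, nz = gx / mag, gy / mag, gz / mag
    return np.gradient(nx, h)[0] + np.gradient(ny, h)[1] + np.gradient(nz, h)[2]

# Distance field of a sphere: its level sets are spheres of radius r,
# whose mean curvature (as divergence of the unit normal) is 2/r.
n = 33
ax = np.arange(n) - n // 2
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
f = np.sqrt(X**2 + Y**2 + Z**2)
H = mean_curvature(f)
```

Memory layout and compiler optimizations matter here precisely because the stencil sweeps the whole volume several times, which is what makes its energy profile worth measuring.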
Omar Del-Tejo-Catalá, Javier Pérez, J. Guardiola, Alberto J. Perez, J. Pérez-Cortes
Real samples are costly to acquire in many real-world problems, so synthetic samples are usually the primary means of training models that require large amounts of data. However, the difference between synthetically generated and real images, called the domain gap, is the most significant hindrance to this solution, as it limits the model's generalization capacity. Domain adaptation techniques are therefore crucial when training models on synthetic samples. This article explores different domain adaptation techniques for pose estimation from a probabilistic multiview perspective. Probabilistic multiview pose estimation addresses the problem of object symmetries, where a single view of an object might not suffice to determine its 6D pose, so the prediction must be treated as a distribution over possible candidates. GANs are currently the state of the art in domain adaptation. In particular, this paper explores CUT and CycleGAN, whose distinct training losses address domain adaptation from different perspectives. The datasets explored are a cylinder and a sphere extracted from a Kaggle challenge; they exhibit perspective-wise symmetries, although holistically they have unique 6D poses. CUT outperforms CycleGAN in feature adaptation, although it is less robust than CycleGAN at keeping keypoints intact after translation, leading to pose prediction errors for some objects. Moreover, this paper found that training the models on synthetic-to-real images and evaluating them on real images improves accuracy for datasets without complex features. This approach is more suitable for industrial applications, as it reduces inference overhead.
"Synthetic-Real Domain Adaptation for Probabilistic Pose Estimation". Omar Del-Tejo-Catalá, Javier Pérez, J. Guardiola, Alberto J. Perez, J. Pérez-Cortes. Computer Science Research Notes, 2023-07-01. DOI: 10.24132/csrn.3301.16 (https://doi.org/10.24132/csrn.3301.16)
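The contrastive objective that distinguishes CUT from CycleGAN can be illustrated with a PatchNCE-style loss: each translated patch should match the source patch at the same location against all other locations. This numpy sketch is a simplified stand-in; the feature extractor, projection heads, and multi-layer patch sampling of the actual method are omitted:

```python
import numpy as np

def patch_nce_loss(queries, keys, tau=0.07):
    """InfoNCE over patch features, as in CUT's PatchNCE (simplified).
    queries: (N, D) features of N translated-image patches.
    keys:    (N, D) features of the corresponding source patches;
             row i of keys is the positive for row i of queries."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = q @ k.T / tau                       # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positives on the diagonal
```

Because the loss is tied to patch locations, it encourages content (and hence keypoints) to stay in place, which relates to the robustness trade-off with CycleGAN discussed above.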
Arc-length or natural parametrization of curves traverses the shape with unit speed, enabling uniform sampling and straightforward manipulation of functions defined on the geometry. However, Farouki and Sakkalis proved that it is impossible to parametrize a plane or space curve as a rational polynomial of its arc-length, except for the straight line. Nonetheless, it is possible to obtain approximate natural parameterizations that are exact up to any epsilon. If the given family of curves possesses a small number of scalar degrees of freedom, this results in simple approximation formulae applicable in high-performance scenarios. To demonstrate this, we consider the problem of finding the natural parametrization of ellipses and cycloids. This requires the inversion of elliptic integrals of the second kind. To this end, we formulate a two-dimensional approximation problem based on machine-epsilon exact Chebyshev proxies for the exact solutions. We also derive approximate low-rank and low-degree rational natural parametrizations via singular value decomposition. The resulting formulas have minimal memory and computational footprint, making them ideal for computer graphics applications.
"Low-Rank Rational Approximation of Natural Trochoid Parameterizations". Csaba Bálint, Gábor Valasek, L. Gergó. Computer Science Research Notes, 2023-07-01. DOI: 10.24132/csrn.3301.91 (https://doi.org/10.24132/csrn.3301.91)
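The inversion that the paper approximates in closed form can be done by brute force numerically. This numpy sketch tabulates the arc length of an ellipse and inverts it by interpolation; the paper's low-rank rational formulas replace exactly this kind of table lookup with a cheap closed-form expression:

```python
import numpy as np

def natural_param_lut(a, b, n=4096):
    """Tabulate arc length s(t) of the ellipse (a cos t, b sin t) by
    trapezoidal integration of the speed |c'(t)| on a dense grid.
    A brute-force stand-in for a closed-form natural parametrization."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    dx, dy = -a * np.sin(t), b * np.cos(t)
    speed = np.hypot(dx, dy)
    s = np.concatenate(
        ([0.0], np.cumsum(0.5 * (speed[1:] + speed[:-1]) * np.diff(t))))
    return s, t

a, b = 2.0, 1.0
s, t = natural_param_lut(a, b)
total = s[-1]                          # full circumference
s_uniform = np.linspace(0.0, total, 100)
t_of_s = np.interp(s_uniform, s, t)    # invert s(t) by interpolation
# Points sampled at t_of_s are (near-)equidistant along the ellipse.
pts = np.stack([a * np.cos(t_of_s), b * np.sin(t_of_s)], axis=1)
```

The appeal of the approximate rational formulas is that they avoid both the integration and the per-query interpolation while keeping the same uniform-speed sampling property.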
We present an approach for visualizing deviations between a 3D-printed object and its digital twin. The corresponding 3D visualization allows, for instance, highlighting particularly critical sections that exhibit high deviations, along with corresponding annotations. To this end, the 3D print first needs to be reconstructed in 3D. However, since the original 3D model that served as the blueprint for the 3D printer typically differs topologically from the reconstructed model, the two geometries cannot simply be compared on a per-vertex basis. To enable easy comparison of two topologically different geometries, we therefore use a multi-level voxel-based representation for both data sets. Besides using different appearance properties to show deviations, a quantitative comparison of the voxel sets based on statistical methods is added as input to the visualization. These methods are also compared to determine the best solution in terms of the shape differences and how the results differ when comparing either voxelized volumes or hulls. The application VoxMesh integrates these concepts and can persistently save the results as voxel sets, meshes, and point clouds, which can be used either by third-party software or by VoxMesh itself to efficiently reproduce and visualize the results of the shape analysis.
"Visualization of deviations between different geometries using a multi-level voxel-based representation". Andreas Dietze, P. Grimm, Yvonne Jung. Computer Science Research Notes, 2023-07-01. DOI: 10.24132/csrn.3301.18 (https://doi.org/10.24132/csrn.3301.18)
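A per-voxel comparison of two topologically different geometries can be sketched as follows. The grid parameters and the simple statistics (IoU and deviating-voxel count) are illustrative stand-ins, not VoxMesh's actual multi-level pipeline:

```python
import numpy as np

def voxelize(points, origin, voxel_size, dims):
    """Boolean occupancy grid from a point sample of a geometry."""
    idx = np.floor((points - origin) / voxel_size).astype(int)
    keep = np.all((idx >= 0) & (idx < np.array(dims)), axis=1)
    grid = np.zeros(dims, dtype=bool)
    grid[tuple(idx[keep].T)] = True
    return grid

def deviation_stats(grid_a, grid_b):
    """Compare two occupancy grids voxel-by-voxel: intersection-over-union
    plus the number of deviating (XOR) voxels."""
    inter = np.logical_and(grid_a, grid_b).sum()
    union = np.logical_or(grid_a, grid_b).sum()
    return inter / union, np.logical_xor(grid_a, grid_b).sum()
```

Because both geometries are resampled onto the same grid, the comparison is independent of their original mesh topology, which is the core motivation stated in the abstract.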
Pub Date: 2019-05-27. DOI: 10.24132/csrn.2019.2901.1.20
Chirine Riachy, Noor Al-Máadeed, Daniel Organisciak, F. Khelifi, A. Bouridane
Despite being often considered less challenging than image-based person re-identification (re-id), video-based person re-id is still appealing as it mimics a more realistic scenario owing to the availability of pedestrian sequences from surveillance cameras. In order to exploit the temporal information provided, a number of feature extraction methods have been proposed. Although the features could be equally learned at a significantly higher computational cost, the scarce nature of labelled re-id datasets encourages the development of robust hand-crafted feature representations as an efficient alternative, especially when novel distance metrics or multi-shot ranking algorithms are to be validated. This paper presents a novel hand-crafted feature representation for video-based person re-id based on a 3-dimensional hierarchical Gaussian descriptor. Compared to similar approaches, the proposed descriptor (i) does not require any walking cycle extraction, hence avoiding the complexity of this task, (ii) can be easily fed into off-the-shelf learned distance metrics, and (iii) consistently achieves superior performance regardless of the matching method adopted. The performance of the proposed method was validated on the PRID2011 and iLIDS-VID datasets, outperforming similar methods on both benchmarks.
"3D Gaussian Descriptor for Video-based Person Re-Identification". Chirine Riachy, Noor Al-Máadeed, Daniel Organisciak, F. Khelifi, A. Bouridane. Computer Science Research Notes, 2019-05-27. DOI: 10.24132/csrn.2019.2901.1.20 (https://doi.org/10.24132/csrn.2019.2901.1.20)
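One level of such a Gaussian summarization can be sketched in a few lines. This is a simplified illustration of the general idea only; the paper's descriptor nests region-level Gaussians inside higher-level Gaussians and uses specific per-pixel features:

```python
import numpy as np

def gaussian_descriptor(features):
    """Summarize a set of local feature vectors (e.g. per-pixel colour and
    gradient values from a frame region) by a single Gaussian, embedded
    as a flat vector: the mean plus the half-vectorized covariance.
    This is one level of a hierarchical Gaussian descriptor."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False)     # sample covariance (ddof=1)
    iu = np.triu_indices(cov.shape[0])       # upper triangle incl. diagonal
    return np.concatenate([mu, cov[iu]])
```

Such fixed-length embeddings can be compared with standard learned distance metrics, which is why the abstract stresses compatibility with off-the-shelf metric learning.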
Pub Date: 2019-05-27. DOI: 10.24132/csrn.2019.2901.1.11
Jean-Nicolas Brunet, Vincent Magnoux, B. Ozell, S. Cotin
This paper proposes a fast, stable and accurate meshless method to simulate geometrically non-linear elastic behaviors. To address the inherent limitations of finite element (FE) models, the discretization of the domain is simplified by removing the need to create polyhedral elements. The volumetric locking effect exhibited by incompressible materials in some linear FE models is also completely avoided. Our approach merely requires that the volume of the object be filled with a cloud of points. To minimize numerical errors, we construct a corotational formulation around the quadrature positions that is well suited for large displacements containing small deformations. The equations of motion are integrated in time following an implicit scheme. The convergence rate and accuracy are validated through both stretching and bending case studies. Finally, results are presented using a set of examples that show how we can easily build a realistic physical model of various deformable bodies with little effort spent on the discretization of the domain.
"Corotated meshless implicit dynamics for deformable bodies". Jean-Nicolas Brunet, Vincent Magnoux, B. Ozell, S. Cotin. Computer Science Research Notes, 2019-05-27. DOI: 10.24132/csrn.2019.2901.1.11 (https://doi.org/10.24132/csrn.2019.2901.1.11)
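The stability that motivates the implicit time integration can be seen on a scalar analogue. This sketch applies backward Euler to a single undamped spring; the paper solves an analogous, much larger linearized system over all quadrature points each step, but the stability argument is the same:

```python
import numpy as np

def backward_euler_spring(x0, v0, k, m, dt, steps):
    """Implicit (backward Euler) integration of m*x'' = -k*x.
    Each step solves a small linear system in (x, v):
        x_new = x_old + dt * v_new
        v_new = v_old - dt * (k/m) * x_new
    i.e.  A @ [x_new, v_new] = [x_old, v_old]."""
    A = np.array([[1.0, -dt],
                  [k * dt / m, 1.0]])
    state = np.array([x0, v0])
    for _ in range(steps):
        state = np.linalg.solve(A, state)
    return state
```

With k = 100, m = 1, an explicit Euler step of dt = 0.5 would blow up; the implicit scheme remains stable (at the cost of numerical damping), which is why implicit integration suits large-step deformable-body simulation.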
Pub Date: 2019-05-27. DOI: 10.24132/csrn.2019.2902.2.1
Matěj Karolyi, Jan Krejčí, Jakub Ščavnický, Roman Vyškovský, M. Komenda
Interactive visualisations on the Internet have become commonplace in recent years. Based on such publicly available visualisations, users can obtain information from various domains quickly and easily. A location-specific method of data presentation can be much more effective using map visualisation than using traditional methods of data visualisation, such as tables or graphs. This paper presents one of the possible ways of creating map visualisations in a modern web environment. In particular, we introduce the technologies used in our case together with their detailed configuration. This description can then serve as a guide for the customisation of the server environment and application settings so that it is easy to create the described type of visualisation outputs. Together with this manual, specific cases are presented on the example of an application which was developed to display the location of medical equipment in the Czech Republic based on data collected from healthcare providers.
"Tools for development of interactive web-based maps: application in healthcare". Matěj Karolyi, Jan Krejčí, Jakub Ščavnický, Roman Vyškovský, M. Komenda. Computer Science Research Notes, 2019-05-27. DOI: 10.24132/csrn.2019.2902.2.1 (https://doi.org/10.24132/csrn.2019.2902.2.1)
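A typical data payload for such a web map is GeoJSON, which mainstream map libraries consume directly. This stdlib-only sketch builds a FeatureCollection of equipment locations; the field names and the sample record are illustrative, not the application's actual schema:

```python
import json

def make_geojson(records):
    """Build a GeoJSON FeatureCollection from (name, lon, lat) records.
    GeoJSON uses [longitude, latitude] coordinate order (RFC 7946)."""
    return {
        "type": "FeatureCollection",
        "features": [
            {
                "type": "Feature",
                "geometry": {"type": "Point", "coordinates": [lon, lat]},
                "properties": {"name": name},
            }
            for name, lon, lat in records
        ],
    }

# Hypothetical record: one piece of medical equipment with its location.
doc = json.dumps(make_geojson([("MRI scanner, Brno", 16.6068, 49.1951)]))
```

Serving such a document from the server lets the client-side map render and restyle the points without any further backend logic.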
Pub Date: 1900-01-01 (likely a metadata placeholder). DOI: 10.24132/csrn.2021.3002.8
M. Junayed, N. Anjum, A. Noman, Baharul Islam
Skin cancer is one of the most dangerous types of cancer, affecting millions of people every year. Detecting skin cancer in its early stages is an expensive and challenging process. In recent studies, machine learning-based methods have helped dermatologists classify medical images. This paper proposes a deep learning-based model to detect and classify skin cancer using a deep Convolutional Neural Network (CNN). Initially, we collected a dataset comprising four classes of skin cancer images and applied augmentation techniques to increase the dataset size. We then designed a deep CNN model and trained it on this dataset. On the test data, our model achieves 95.98% accuracy, exceeding the two pre-trained models, GoogleNet by 1.76% and MobileNet by 1.12%, respectively. The proposed deep CNN model also beats other contemporaneous models while being computationally comparable.
"A Deep CNN Model for Skin Cancer Detection and Classification". M. Junayed, N. Anjum, A. Noman, Baharul Islam. Computer Science Research Notes. Publication date listed as 1900-01-01 (placeholder). DOI: 10.24132/csrn.2021.3002.8 (https://doi.org/10.24132/csrn.2021.3002.8)
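The building blocks of such a CNN, convolution, ReLU activation, and max pooling, can be sketched in plain numpy. This is a didactic forward pass of one layer, not the paper's trained architecture:

```python
import numpy as np

def conv2d(img, kern):
    """Valid 2D cross-correlation: the core operation of a CNN layer."""
    H, W = img.shape
    kh, kw = kern.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def relu(x):
    """Element-wise rectified linear activation."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; trailing rows/cols are cropped."""
    H, W = x.shape
    x = x[:H - H % size, :W - W % size]
    return x.reshape(H // size, size, W // size, size).max(axis=(1, 3))
```

Stacking many such layers (with learned kernels) and a final classifier is what yields the image-level cancer-class prediction; frameworks like those used for GoogleNet and MobileNet implement exactly these primitives, heavily optimized.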