Alexandre Pereira Junior, Thiago Pedro Donadon Homem
In the context of current epidemic diseases, this study developed a web application that can monitor the use of protective masks in public environments. Built with the Flask framework in Python, the application provides a control panel to help visualize the collected data. In the detection process, the Haar Cascade algorithm is used to classify faces with and without protective masks. The resulting web application is lightweight, allowing captured images to be detected and stored in the cloud, with the possibility of further data analysis. The classifier presents precision, recall and F-score of 63%, 93% and 75%, respectively. Although the accuracy is satisfactory, new experiments will be carried out to explore other computer vision technologies, such as deep learning.
{"title":"Application of Artificial Intelligence in Monitoring the Use of Protective Masks","authors":"Alexandre Pereira Junior, Thiago Pedro Donadon Homem","doi":"10.32629/jai.v4i2.500","DOIUrl":"https://doi.org/10.32629/jai.v4i2.500","url":null,"abstract":"In the context of current epidemic diseases, this study developed a web application, which can monitor the use of protective masks in public environments. Using the Flask framework in Python language, the application has a control panel to help visualize the obtained data. In the detection process, Haar Cascade algorithm is used to classify faces with and without protective masks. Therefore, the web applications are lightweight, allowing the detection and storage of images captured in the cloud and thte possibility of further data analysis. The classifier presents precision, reversal and f-score of 63%, 93% and 75%, respectively. Although the accuracy is satisfactory, new experiments will be carried out to explore new computer vision technologies, such as the use of deep learning.","PeriodicalId":70721,"journal":{"name":"自主智能(英文)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48566717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Diego Felipe Arbeláez-Campillo, Jorge Jesús Villasmil Espinoza, M. J. Rojas-Bahamón
In the 21st century, artificial intelligence is a force that in many respects has gone beyond fiction, because it is present in all fields of social life, from the Internet search engine that determines tastes and preferences in obtaining digital information to the intelligent refrigerator that can issue purchase orders to keep food available when some item runs out. The purpose of this paper is to analyze the ethical, ontological and legal problems that may arise from the wide use of artificial intelligence in today’s society, as a preliminary attempt to answer the question raised in the title. In terms of methodology, this paper was prepared from written documentary sources, such as literary works, international news articles and refereed articles published in scientific journals. Its conclusion is that AI may change the lifestyle of the whole civilization in many ways, may even negatively change the human condition by altering human identity and genetic integrity, and may weaken people’s leading role in constructing their own reality.
{"title":"Artificial Intelligence and Human Condition: Opposing Entities or Complementary Forces?","authors":"Diego Felipe Arbeláez-Campillo, Jorge Jesús Villasmil Espinoza, M. J. Rojas-Bahamón","doi":"10.32629/jai.v4i2.497","DOIUrl":"https://doi.org/10.32629/jai.v4i2.497","url":null,"abstract":"In the 21st century, artificial intelligence is a force that surpasses artificial intelligence in many aspects, because it has appeared in all fields of social life, from the Internet search engine that determines the taste and preference of obtaining digital information to the intelligent refrigerator that can issue purchase orders to maintain its availability when some food is exhausted. The purpose of this paper is to analyze the ethical, ontological and legal problems that may arise from the wide use of artificial intelligence in today’s society, as a preliminary attempt to solve the problems raised in the title. In terms of methodology, this is a paper prepared using written document sources, such as: literary works, international news articles and arbitration articles published in scientific journals. Its conclusion is that AI may change the lifestyle of the whole civilization in many ways, and even negatively change the human condition by changing human identity and genetic integrity, and weaken people’s leading role in building their own realityd.","PeriodicalId":70721,"journal":{"name":"自主智能(英文)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45579433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social media check-in data contains a wealth of user activity information. Understanding the types of activities and the behavior of social media users is of great research significance for exploring human mobility and behavior patterns. This paper studies a user activity classification method for Sina Weibo (a very popular Chinese social network service, referred to as “Weibo”) that combines image representation and spatiotemporal data classification technology to identify the activity behavior represented by Weibo check-in data. First, the user activities represented by the Sina Weibo check-in data are divided into six categories according to POI attribute information: “catering”, “life services”, “campus”, “outdoors”, “entertainment” and “travel”. Then, using Convolutional Neural Network (CNN) and K-Nearest Neighbor (KNN) classification methods, the image scene information and spatiotemporal information in the check-in data are fused to classify the activity behavior of Weibo users. The experimental results show that the proposed method can significantly improve the accuracy of user activity type recognition and provide more effective data support for accurately exploring human behavior and activities.
{"title":"Spatiotemporal Information Fusion Method of User and Social Media Activity","authors":"Chao Yang, Liu Yang, Kunlun Qi","doi":"10.32629/jai.v4i2.485","DOIUrl":"https://doi.org/10.32629/jai.v4i2.485","url":null,"abstract":"Social media check-in data contains a lot of user activity information. Understanding the types of activities and behavior of social media users has important research significance for exploring human mobility and behavior patterns. This paper studies the user activity classification method for Sina Weibo (a very popular Chinese social network service, referred to as “Weibo”), which combines image expression and spatiotemporal data classification technology to realize the identification of the activity behavior represented by the microblog check-in data. Firstly, the user activities represented by the Sina Weibo check-in data are divided into six categories according to POI attribute information: “catering”, “life services”, “campus”, “outdoors”, “entertainment” and “travel”; Then, through the Convolutional Neural Network (CNN) and K-Nearest Neighbor (KNN) classification methods, the image scene information and spatiotemporal information in the check-in data are fused to classify the activity behavior of microblog users. The experimental results show that the proposed method can significantly improve the accuracy of microblog user activity type recognition and provide more effective data support for accurately exploring human behavior activities.","PeriodicalId":70721,"journal":{"name":"自主智能(英文)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41541392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Urdu is Pakistan's national language. However, Chinese-language expertise is very scarce in Pakistan and other Asian nations, and little research has been undertaken in the area of Chinese-to-Urdu machine translation. To address these problems, we designed a Chinese-Urdu electronic dictionary and studied sentence-level machine translation technology based on deep learning. For the dictionary component of the Chinese-Urdu machine translation system, we collected and constructed an electronic dictionary containing 24,000 Chinese-to-Urdu entries. For sentence translation, we used English as an intermediate language and, based on existing Chinese-English and English-Urdu parallel corpora, constructed a bilingual parallel corpus containing 66,000 Chinese-to-Urdu sentences. The corpus was used to train two NMT models (an LSTM model and a Transformer model), and the output of both models was compared with the reference translations using the bilingual evaluation understudy (BLEU) score. The LSTM model showed a BLEU gain of 0.067 to 0.41, while the Transformer model showed a gain of 0.077 to 0.52, better than the LSTM model. Furthermore, we compared the proposed models with Google and Microsoft translation.
{"title":"Research Chinese-Urdu Machine Translation Based on Deep Learning","authors":"Zeshan Ali","doi":"10.32629/jai.v3i2.279","DOIUrl":"https://doi.org/10.32629/jai.v3i2.279","url":null,"abstract":"Urdu is Pakistan 's national language. However, Chinese expertise is very negligible in Pakistan and the Asian nations. Yet fewer research has been undertaken in the area of computer translation on Chinese to Urdu. In order to solve the above problems, we designed of an electronic dictionary for Chinese-Urdu, and studied the sentence-level machine translation technology which is based on deep learning. The Design of an electronic dictionary Chinese-Urdu machine translation system we collected and constructed an electronic dictionary containing 24000 entries from Chinese to Urdu. For Sentence we used English as an intermediate language, and based on the existing parallel corpus of Chinese to English and English to Urdu, we constructed a bilingual parallel corpus containing 66000 sentences from Chinese to Urdu. The Corpus has trained by using two NMT Models (LSTM,Transformer Model) and the above two translation model were compared to the desired translation, with the help of bilingual valuation understudy (BLEU) score. On NMT, The LSTM Model is gain of 0.067 to 0.41 in BLEU score while on Transformer model, there is gain of 0.077 to 0.52 in BLEU which is better than from LSTM Model score. Furthermore, we compared the proposed model with Google and Microsoft translation.","PeriodicalId":70721,"journal":{"name":"自主智能(英文)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46974815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To address the long positioning time and poor positioning accuracy of traditional positioning systems, a WeChat applet QR code area positioning system based on the LBS cloud platform is proposed and designed. The overall architecture of the system is divided into three parts: the LBS cloud service, central data processing, and the applet QR code positioning terminal. The hardware design covers the server-side module, processor and positioning module to provide a basis for system construction. In the software design, the WeChat applet QR code area image is collected, the image edge features are enhanced and filtered, the positioning target is determined according to the processed edge features, and the design of the WeChat applet QR code area positioning system is completed. The experimental results show that the positioning time of the system is about 50% of that of the traditional system, and the positioning accuracy is consistently maintained above 99.5%, which is a significant advantage.
{"title":"The QR code intelligent positioning system of the LBS cloud platform in the Internet of things environment","authors":"Xinyue Wang, Haibao Wang","doi":"10.32629/jai.v3i2.338","DOIUrl":"https://doi.org/10.32629/jai.v3i2.338","url":null,"abstract":"Aiming at the problems of long positioning time and poor positioning accuracy in traditional positioning systems, a WeChat applet QR code area positioning system based on the LBS cloud platform is proposed and designed. The overall architecture of the system is divided into three parts: LBS cloud service, central data processing, and QR code positioning terminal for small programs. The hardware is designed from the server-side module, processor and positioning module to provide a basis for system construction. In the software design, the WeChat applet QR code area image is collected, the image edge features are enhanced and filtered, the positioning target is determined according to the processed image edge features, and the WeChat applet QR code area positioning system design is completed. The experimental results show that the positioning time of the system is equivalent to 50% of the traditional system, and the positioning accuracy is always maintained above 99.5%, which has significant advantages.","PeriodicalId":70721,"journal":{"name":"自主智能(英文)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47153543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
José Ortega y Gasset's first text reflecting on technology was published in 1935. Nearly a century later, this paper attempts to recover a concept put forward by the Spanish philosopher in Meditación de la técnica: the supernatural. Today, the biggest challenge facing technology is to maximize artificial intelligence and make it a means of challenging the restrictions imposed by nature. Among the most prominent proposals in the field of artificial systems are superintelligence and the singularity, the two aspirations most desired by thinkers such as Nick Bostrom and Raymond Kurzweil. Therefore, if the field of technology is vigorously developing artificial intelligence, we should ask ourselves whether the motivation behind this momentum is really based on the human need for the supernatural that Ortega y Gasset spoke of.
{"title":"The Status Quo of José Ortega y Gasset’s Supernatural Concepts: From the Perspective of Artificial Intelligence","authors":"Antonio Luis Terrones Rodríguez","doi":"10.32629/jai.v4i1.494","DOIUrl":"https://doi.org/10.32629/jai.v4i1.494","url":null,"abstract":"The first text of José Ortega y Gasset thinking about technology was published in 1935. Nearly a century later, this paper attempts to save a concept put forward by Spanish philosophers in Meditación de la técnica, that is: supernatural. Today, the biggest challenge facing technology is to maximize artificial intelligence and make it a means to challenge the restrictions imposed by nature. One of the most prominent suggestions in the field of artificial systems is superintelligence and uniqueness, which are the two most desired wishes of thinkers such as Nick Bostrom or Raymond Kurzweil. Therefore, if the field of technology is vigorously developing artificial intelligence, we should ask ourselves whether the motivation behind this momentum is really based on human needs for supernatural phenomena, which Ortega y Gasset have been talking about.","PeriodicalId":70721,"journal":{"name":"自主智能(英文)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43579959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Various studies in the field of robotics have made great progress in developing methods to effectively determine the position of robots in unknown environments. The simultaneous localization and mapping (SLAM) task makes it possible to determine the current position of the robot while mapping its path. During this mapping, solid elements (landmarks) present in the actual environment are also detected, indicating changes in the robot's heading as it walks. This work provides an implementation analysis of the probabilistic particle filter method, which ensures correct performance in a controlled real scene under specific conditions, acquires environment information without a network connection by storing sampled temperature values in a CSV file, and monitors the temperature measurements by displaying a heat map. A successful analysis must ensure the robustness of the results obtained when implementing these systems and take into account the feasibility of applying this work to the proposed objectives.
{"title":"The Implementation of Hexagonal Robot Mapping and Positioning System Focuses on Environmental Scanning and Temperature Monitoring","authors":"Cristina Alvarado-Torres, Esteban Velarde-Garcés, Orlando Barcia-Ayala","doi":"10.32629/jai.v4i1.493","DOIUrl":"https://doi.org/10.32629/jai.v4i1.493","url":null,"abstract":"Various researches in the field of robotics have made great progress in developing methods to effectively determine the position of robots in unknown environments. The simultaneous localization and mapping (SLAM) task make determining the current position of the robot and performing path mapping possible. In this mapping, solid elements (landmarks) existing in the actual environment are even detected, which indicate that the direction of the robot changes during walking. This scheme provides the implementation analysis of the probabilistic particle filter method, which ensures the correct performance in the controlled actual scene under specific conditions, obtains the non-network connection environment information by storing the data in the temperature value sampling in the CVS file, and monitors the temperature measurement by displaying the heat map. Successful analysis must ensure the robustness of the results obtained when implementing these systems and take into account the feasibility of applying this work to the proposed objectivesd.","PeriodicalId":70721,"journal":{"name":"自主智能(英文)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49220711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The fractional order modeling method of robot dynamics with one, two and three degrees of freedom is introduced. The stability of the fractional order model is proved using the second-order Lyapunov method. A basic physical parameter is considered, namely the inertial mass of the connecting rod. FreeCAD software is used for the mechanical design. The dynamic models of the 2-DOF and 3-DOF robots are established, and their motion trajectories are given in the plane (x, y) and in space (x, y, z), respectively. The model is programmed on a microcontroller-based development board, whose advantage lies in its peripheral output: it has two analog output channels, whose signals are sent to an oscilloscope. The results are consistent with the proposed model.
{"title":"Fractional Order Modeling of 1,2,3 DOF Robot Dynamic","authors":"Israel Cerón-Morales","doi":"10.32629/jai.v4i1.490","DOIUrl":"https://doi.org/10.32629/jai.v4i1.490","url":null,"abstract":"<p class=\"15\">The fractional order modeling method of robot dynamics with one, two and three degrees of freedom is introduced. The stability of the fractional order model is proved by using the second-order Lyapunov method. A basic physical parameter is considered, that is, the inertial mass of the connecting rod. Freecad software is used for mechanical design. The dynamic models of 2-<span style=\"font-family: 'Times New Roman';\">DOF</span> and 3-<span style=\"font-family: 'Times New Roman';\">DOF</span> robots are established, and their motion trajectories are given in plane (x, y) and space (x, y, z) respectively. The model is programmed on the development card based on microcontroller. The advantage of the development card lies in its peripheral output<span style=\"font-family: 'Times New Roman';\">,</span> <span style=\"font-family: 'Times New Roman';\">because</span> it has two analog output channels<span style=\"font-family: 'Times New Roman';\">,</span> <span style=\"font-family: 'Times New Roman';\">which are sent</span> to the oscilloscope. The results are consistent with the proposed model.</p>","PeriodicalId":70721,"journal":{"name":"自主智能(英文)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45604068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In a digital era, traditional areas like Human Resources have to adapt to stay alive and competitive. Processes have been changing drastically from paper and talks into systems and workflows. Data is now more than ever in the spotlight and has become an essential asset to ensure delivery, performance, quality and predictability. But first, data has to be organized, combined, verified, treated and transformed to become meaningful information, not forgetting automated so that it is delivered in time and supports decision making on a daily basis. Business Intelligence (BI) is the tool capable of doing this, and we are the minds to pull it off.
{"title":"Data Analytics to Increase Performance in the Human Resources Area","authors":"Sergio Henrique Monte Santo Andrade","doi":"10.32629/jai.v4i1.80","DOIUrl":"https://doi.org/10.32629/jai.v4i1.80","url":null,"abstract":"In a digital era, traditional areas like Human Resources have to adapt themselves to stay alive and competitive. The processes have been drastically changing from paper and talks into systems and workflows. Data is now more than ever in the spotlight and have become an essential asset to ensure delivery, performance, quality and predictability. But first, data has to be organized, combined, verified, treated and transformed to become meaningful information, not forgetting automatized to be delivered in time and supporting decision making in a daily basis. Business Intelligence (BI) is the tool capable to do it and we are the minds to pull it off.","PeriodicalId":70721,"journal":{"name":"自主智能(英文)","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41469281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yongzhang Zhou, Jun Wang, R. Zuo, Fan Xiao, W. Shen, Shugong Wang
Geological big data is growing exponentially. Only by developing intelligent data processing methods can we keep up with this extraordinary growth. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it has become a frontier hotspot of geological big data research: it will give geological big data wings and change geology. Machine learning is the process of training a model from data, eventually yielding decisions oriented toward a certain performance measure. Deep learning is an important subclass of machine learning research. It learns more useful features by building machine learning models with many hidden layers and massive training data, thereby improving the accuracy of classification or prediction. The convolutional neural network algorithm is one of the most commonly used deep learning algorithms and is widely applied in image recognition and speech analysis. The Python language plays an increasingly important role in the field of science. Scikit-Learn is a machine learning library that provides algorithms for data preprocessing, classification, regression, clustering, prediction and model analysis. Keras is a deep learning library based on Theano/TensorFlow, which can be applied to build a simple artificial neural network.
{"title":"Machine Learning, Deep Learning and Implementation Language in Geological Field","authors":"Yongzhang Zhou, Jun Wang, R. Zuo, Fan Xiao, W. Shen, Shugong Wang","doi":"10.32629/jai.v4i1.479","DOIUrl":"https://doi.org/10.32629/jai.v4i1.479","url":null,"abstract":"Geological big data is growing exponentially. Only by developing intelligent data processing methods can we catch up with the extraordinary growth of big data. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent. Machine learning has become the frontier hotspot of geological big data research. It will make geological big data winged and change geology. Machine learning is a training process of model derived from data, and it eventually gives a decision oriented to a certain performance measurement. Deep learning is an important subclass of machine learning research. It learns more useful features by building machine learning models with many hidden layers and massive training data, so as to improve the accuracy of classification or prediction at last. Convolutional neural network algorithm is one of the most commonly used deep learning algorithms. It is widely used in image recognition and speech analysis. Python language plays an increasingly important role in the field of science. Scikit-Learn is a bank related to machine learning, which provides algorithms such as data preprocessing, classification, regression, clustering, prediction and model analysis. Keras is a deep learning bank based on Theano/Tensorflow, which can be applied to build a simple artificial neural network.","PeriodicalId":70721,"journal":{"name":"自主智能(英文)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49589469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}