The use of decision support systems is an effective way to facilitate and improve the quality of decision-making, as well as to resolve related issues in various areas. Algeria currently faces difficulties in its decision-making processes due to the lack of visibility into the informational legacy of economic institutions, the diversity of information sources, and the large amount of complex data that changes and evolves constantly. In this paper, we propose a decision-making system for economic policy makers. The proposed system is scalable and incremental, allowing new sectors of economic assets to be introduced gradually in order to cover the whole Algerian territory. To do so, several methods and approaches are used for collecting data from information sources, implementing the data warehouse (DW), and finally restitution via the reporting process. A decision support system intended to highlight the country's economic wealth through statistical reports and maps is currently being designed and developed.
Kahina Semar-Bitah, Chahrazed Tarabet, Meriem Mouzai, Abderrahim Chetibi. "Scalable Proposed Decision Support System: Algerian economic investment support." In Proceedings of the 9th International Conference on Information Systems and Technologies, 2019. https://doi.org/10.1145/3361570.3361613
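The collect-transform-report workflow described above can be sketched with a tiny star schema. This is an illustrative stand-in, not the paper's actual warehouse design; the table and column names are assumptions.

```python
import sqlite3

# Minimal sketch of the DW workflow: extract records from heterogeneous
# sources, load them into a small star schema (one dimension, one fact
# table), then produce a statistical report. Schema names are illustrative.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE dim_sector (sector_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE fact_asset (asset_id INTEGER, sector_id INTEGER, value REAL)")

# Extract: records as they might arrive from two different sources.
source_a = [(1, "agriculture", 120.0), (2, "agriculture", 80.0)]
source_b = [(3, "industry", 300.0)]

# Transform + load: resolve sector names into the dimension, then load facts.
sectors = {}
for asset_id, sector_name, value in source_a + source_b:
    if sector_name not in sectors:
        sectors[sector_name] = len(sectors) + 1
        cur.execute("INSERT INTO dim_sector VALUES (?, ?)",
                    (sectors[sector_name], sector_name))
    cur.execute("INSERT INTO fact_asset VALUES (?, ?, ?)",
                (asset_id, sectors[sector_name], value))

# Restitution: a report aggregated by sector, ready for charting or mapping.
report = cur.execute(
    "SELECT d.name, SUM(f.value) FROM fact_asset f "
    "JOIN dim_sector d ON f.sector_id = d.sector_id "
    "GROUP BY d.name ORDER BY d.name"
).fetchall()
```

Adding a new economic sector later only means inserting a new dimension row and its facts, which is what makes the incremental design workable.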
In this paper, the design and implementation of an interactive mobile application built on the Android framework is presented. The proposed system (MQAP) brings quiz sessions on various technical topics into the classroom. It allows an examiner to create a quiz and populate it with questions and answers. When a student takes the exam, the application generates random questions for him/her; students can see their results as soon as they finish the test, and can also review their previous quizzes and marks. Whenever the Android device's Wi-Fi is connected to the Internet, users receive immediate feedback after submitting their answers. Several software packages were used to implement MQAP, including Android Studio, the Java programming language, and SQL Server 2012. The home pages of the admin and the examiner were designed using Visual Studio and the C# programming language. Finally, the paper describes, in detail, the architecture of the proposed system, its modular internal structure, and the database schema.
H. Esmaeel, N. K. Abbood, Aumama MohammedFarhan. "Mobile Quiz on Android Platform (MQAP)." In Proceedings of the 9th International Conference on Information Systems and Technologies, 2019. https://doi.org/10.1145/3361570.3361612
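The random-question and immediate-feedback steps described above can be sketched as follows. Function names and the question bank are illustrative assumptions, not MQAP's actual code (which is written in Java against SQL Server).

```python
import random

# Hedged sketch: draw a fixed-size quiz from a question bank without
# replacement, then grade submitted answers for immediate feedback.
QUESTION_BANK = {"q1": "b", "q2": "a", "q3": "d", "q4": "c", "q5": "a"}

def generate_quiz(bank, n, rng=random):
    # Sample without replacement so each student gets n distinct questions.
    return rng.sample(sorted(bank), n)

def grade(bank, answers):
    # Score is the count of answers matching the stored key.
    return sum(1 for q, a in answers.items() if bank.get(q) == a)

quiz = generate_quiz(QUESTION_BANK, 3, random.Random(42))
score = grade(QUESTION_BANK, {"q1": "b", "q2": "c"})  # q1 correct, q2 wrong
```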
Y. Hajjar, Mazen El-Sayed, Abd El Salam Al Hajjar, B. Daya
Electroencephalogram (EEG) is a signal that measures the electrical activity of the brain. It contains specific patterns that predict neuro-developmental impairments in premature newborns. Extracting these patterns from a set of EEG records provides a dataset for machine learning, used to implement an intelligent classification system that predicts the baby's prognosis. In previous work, we showed that inter-burst intervals (IBI) found in EEG records, as well as low-amplitude bursts, predict abnormal outcomes for premature infants. Based on this hypothesis, we defined 20 parameters of the EEG signal at birth to propose an efficient automatic classification system that predicts a risk to cerebral maturation at birth which can lead to a pathological state at 2 years. In this paper, we use correlation analysis between the 20 EEG parameters to find the redundant sets among them and eliminate those that are less correlated with the class, thereby reducing their number. To do this, we calculate the correlation coefficients between all the attributes to obtain their correlation matrix. Next, we select the attribute sets with a correlation greater than 90% to find the parameters that give close results. Then, among these parameters, we compute the correlation of each with the class to determine which is the less important and eliminate it. Finally, we reduce the number of parameters to 17 and enhance the accuracy of the proposed classification system from 88.4% to 93.2%. This system has good sensitivity for predicting the neurological status of preterm infants and can be used as a decision aid in clinical treatment.
Y. Hajjar, Mazen El-Sayed, Abd El Salam Al Hajjar, B. Daya. "Correlation analysis between EEG parameters to enhance the performance of intelligent predictive models for the neonatal newborn sick effects." In Proceedings of the 9th International Conference on Information Systems and Technologies, 2019. https://doi.org/10.1145/3361570.3361615
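The elimination procedure described above (correlation matrix, 90% redundancy threshold, keep the parameter more correlated with the class) can be sketched on toy data. The three synthetic features stand in for the 20 EEG parameters; this is an illustration of the technique, not the authors' implementation.

```python
import numpy as np

# Sketch of correlation-based feature reduction: find attribute pairs with
# |correlation| > 0.9 and drop, from each pair, the attribute that is less
# correlated with the class label.
rng = np.random.default_rng(0)
n = 200
x0 = rng.normal(size=n)
x1 = x0 + rng.normal(scale=0.01, size=n)   # nearly duplicates x0 (redundant)
x2 = rng.normal(size=n)                    # independent parameter
X = np.column_stack([x0, x1, x2])
y = (x0 + 0.5 * x2 > 0).astype(float)      # class depends on x0 and x2

corr = np.corrcoef(X, rowvar=False)        # parameter-parameter correlations
class_corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                       for j in range(X.shape[1])])

dropped = set()
for i in range(corr.shape[0]):
    for j in range(i + 1, corr.shape[1]):
        if abs(corr[i, j]) > 0.9 and i not in dropped and j not in dropped:
            # Keep whichever of the pair is more correlated with the class.
            dropped.add(j if class_corr[i] >= class_corr[j] else i)

kept = [j for j in range(X.shape[1]) if j not in dropped]
```

Here exactly one of the two near-duplicate features is removed, while the independent feature always survives.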
In the first generation of intervention and home-support services, needs were generally reported by phone call and described orally. In most cases, there was no recorded information about the patient's condition or medical history. The interveners come either from the private or the state health sector. Some people could intervene more quickly than others, yet they had not been recruited, or worked as liberal professionals. With the development of technology, systems based on data mining or artificial intelligence have been developed to focus, for example, on intervention time. Although intervention and home-based care are the subject of many studies [1], the overall decision-making problem is not sufficiently addressed. On the one hand, the state of health presupposes a definition of the patient: a set of parameters characterizing the person's daily habits, analyzed in parallel with the evolution of physiological and environmental data. On the other hand, it is necessary to take into consideration the medical corps: each practitioner's location and profile, meaning not only professional status but also abilities and skills that are not explicitly described in a curriculum. Different studies and systems exist in the literature [2]. Each of these studies tackles only part of the parameters: they consider either the monitoring of daily activities, the monitoring of physiological data, or other environmental aspects. They either account for the specificities of the medical corps' profiles, use probabilistic data mining that requires many interactions with experts to interpret the data, or rely on an expert system based on inference rules defined by medical experts. In addition, most systems do not use a controlled vocabulary providing the needed semantics, which complicates information sharing and collaborative work.
The objective of the e-SAAD project is to propose a methodological process to facilitate the analysis and design of intervention and home-support systems. The process should identify the generic and specific aspects of each part. The patient's data set, profile, history, environment and location should be taken into consideration, as well as the service providers, their profiles, their skills and, essentially, their availability and locations. These models must be open, so as to adapt to new data sources.
Abdelweheb Gueddes, M. Mahjoub. "e-SAAD system: Ontologies based approach for home Care Services platform." In Proceedings of the 9th International Conference on Information Systems and Technologies, 2019. https://doi.org/10.1145/3361570.3361597
Mostefa Hamdani, Youcef Aklouf, Hadj Ahmed Bouarara
Cloud computing is a modern paradigm that provides its users with all kinds of services. A large number of users are attracted to this technology, seeking greater satisfaction. Load balancing is a very important area of research for the efficient use of IT resources. Many techniques have been proposed to improve the overall performance of the cloud and to provide users with more satisfactory and efficient services. In this paper, we propose a load-balancing algorithm based on the weights of servers in the cloud platform. We suggest using fuzzy logic to represent the weights of the different nodes. Moreover, we implement the approach on separate requests. The results show that this approach effectively improves the load-balancing process.
Mostefa Hamdani, Youcef Aklouf, Hadj Ahmed Bouarara. "Improved fuzzy Load-Balancing Algorithm for Cloud Computing System." In Proceedings of the 9th International Conference on Information Systems and Technologies, 2019. https://doi.org/10.1145/3361570.3361589
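The fuzzy weighting idea described above can be sketched with triangular membership functions over node utilisation and a weighted-average defuzzification. The breakpoints and crisp weight levels below are illustrative assumptions, not the paper's rule base.

```python
# Hedged sketch: map each node's utilisation (0 = idle, 1 = saturated) to a
# dispatch weight through fuzzy "low" / "medium" / "high" load sets.

def tri(x, a, b, c):
    # Triangular membership function: 0 outside (a, c), peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def node_weight(load):
    # Degrees of membership in the three load sets.
    low = tri(load, -0.5, 0.0, 0.5)
    med = tri(load, 0.2, 0.5, 0.8)
    high = tri(load, 0.5, 1.0, 1.5)
    # Each fuzzy set votes for a crisp weight (lightly loaded nodes get more
    # traffic); defuzzify by weighted average of the votes.
    levels = {"low": 1.0, "med": 0.5, "high": 0.1}
    num = low * levels["low"] + med * levels["med"] + high * levels["high"]
    den = low + med + high
    return num / den if den else 0.0

weights = {name: node_weight(load)
           for name, load in [("node-a", 0.1), ("node-b", 0.5), ("node-c", 0.9)]}
```

A dispatcher would then route proportionally to these weights, so the lightly loaded node-a receives the most requests.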
Chicken meat production is in demand nowadays because of the increasing population around the world. Monitoring a chicken farm can be very helpful for producing large quantities of chicken meat and meeting people's demand. However, various environmental factors affect the growth of a chicken and therefore the production of chicken meat for food consumption. This study aimed to determine, through a literature survey, the different sensors and materials that can be included in a system architecture for smart chicken farming focused on monitoring environmental parameters. The result of the literature survey was used as the basis for the design of the system architecture for smart chicken farming. Based on the survey, different sensors and materials could be used to monitor environmental parameters in a chicken farm. The sensors and materials included in the system architecture were chosen for their low cost, effectiveness, and positive record of use.
J. G. Bea, Josephine S. Dela Cruz. "Chicken Farm Monitoring System Using Sensors and Arduino Microcontroller." In Proceedings of the 9th International Conference on Information Systems and Technologies, 2019. https://doi.org/10.1145/3361570.3361607
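The monitoring logic such an architecture implies can be sketched as a range check over sensor readings. The parameters and thresholds below are illustrative assumptions, not values from the study (whose firmware would run on the Arduino itself).

```python
# Hedged sketch: compare environmental readings against acceptable ranges
# and flag the parameters that need the farmer's attention.
RANGES = {
    "temperature_c": (18.0, 30.0),   # assumed comfortable range
    "humidity_pct": (50.0, 70.0),    # assumed comfortable range
    "ammonia_ppm": (0.0, 25.0),      # assumed safe upper bound
}

def check_readings(readings):
    # Return, sorted, the parameters whose values fall outside their range.
    alerts = []
    for param, value in readings.items():
        lo, hi = RANGES[param]
        if not lo <= value <= hi:
            alerts.append(param)
    return sorted(alerts)

alerts = check_readings(
    {"temperature_c": 34.0, "humidity_pct": 60.0, "ammonia_ppm": 31.0})
```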
In this paper, we present a novel framework for image saliency assessment based on an adaptive model. The latter evaluates image-content importance using a tuning strategy based on information-theoretic concepts coupled with a wavelet multiscale image representation. Our saliency-based encoding can characterize both regular and irregular structures within the image. The performance of the proposed model is benchmarked against models available in the literature. The proposed saliency assessment mechanism offers very promising results.
Bachir Kaddar, H. Fizazi, D. Mansouri. "Saliency Assessment Using Selective Features Based on Entropy and Wavelet Transform." In Proceedings of the 9th International Conference on Information Systems and Technologies, 2019. https://doi.org/10.1145/3361570.3361583
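The entropy-plus-wavelet combination can be illustrated crudely: score a region by the Shannon entropy of its detail (high-frequency) coefficients from a one-level Haar step. This is only a loose illustration of the ingredients, not the paper's adaptive tuning model.

```python
import numpy as np

# Loose sketch: textured/irregular regions produce varied Haar detail
# coefficients and hence higher entropy than flat regions.

def haar_detail(block):
    # One-level Haar step over 2x2 cells: horizontal, vertical and diagonal
    # difference magnitudes, summed into a single detail map.
    a = block[0::2, 0::2]; b = block[0::2, 1::2]
    c = block[1::2, 0::2]; d = block[1::2, 1::2]
    return np.abs(a - b) + np.abs(a - c) + np.abs(a + d - b - c)

def entropy(values, bins=8):
    # Shannon entropy (bits) of the coefficient histogram.
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

img = np.zeros((8, 8))
img[4:, 4:] = np.arange(16).reshape(4, 4) ** 2   # irregular, textured region
flat_score = entropy(haar_detail(img[:4, :4]))       # uniform region
textured_score = entropy(haar_detail(img[4:, 4:]))   # irregular region
```

The textured region scores strictly higher, which is the basic signal a saliency model built on these features exploits.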
M. Aidoud, M. Sedraoui, Chams-Eddine Feraga, A. Sebbagh
In this paper, we propose a method for robustifying the initial polynomial form of the generalized predictive control (GPC) law for a permanent magnet synchronous machine (PMSM). The procedure consists of three steps. First, an initial predictive controller is synthesized to ensure good tracking of the closed-loop system's properties. Second, a robust H∞ controller is synthesized by solving the mixed-sensitivity problem using the two Riccati equations, to ensure better dynamics in regulation. Third, the two previous controllers are combined using Youla parameterization to determine a robustified GPC controller. This controller should simultaneously preserve the tracking dynamics of the initial GPC controller and provide the same robustness as the H∞ controller. To validate the efficiency of this method, a permanent magnet synchronous machine (PMSM), which represents a real process, is used; its dynamic behavior is modeled by an uncertain model. In our case, the nominal model is used for the synthesis of the GPC controller, with and without noise, and likewise for the synthesis of the H∞ controller. The system is controlled by the three previous controllers and their results are compared in the time and frequency domains.
M. Aidoud, M. Sedraoui, Chams-Eddine Feraga, A. Sebbagh. "Robustification of the generalized predictive law (GPC) by the implicit application of the H∞ method." In Proceedings of the 9th International Conference on Information Systems and Technologies, 2019. https://doi.org/10.1145/3361570.3361601
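For reference, the mixed-sensitivity H∞ problem invoked in the second step is conventionally stated as follows (standard textbook formulation; the weighting filters $W_1, W_2, W_3$ are design choices not specified here):

```latex
\min_{K\ \text{stabilizing}}
\left\|
\begin{pmatrix} W_1 S \\ W_2 K S \\ W_3 T \end{pmatrix}
\right\|_\infty ,
\qquad
S = (I + G K)^{-1},
\quad
T = G K (I + G K)^{-1},
```

where $G$ is the plant, $K$ the controller, $S$ the sensitivity function and $T$ the complementary sensitivity. Youla parameterization then describes all stabilizing controllers through a free stable parameter $Q$, which is the degree of freedom used to blend the GPC tracking behavior with the H∞ robustness.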
Joni O. Salminen, Juan Corporan, Roope Marttila, Tommi Salenius, B. Jansen
We use machine learning to predict the search engine rank of webpages. We use a list of keywords for 30 content blogs of an e-commerce company in the gift industry to retrieve 733 content pages occupying the first-page Google rankings and predict their rank using 30 ranking factors. We test two models, Light Gradient Boosting Machine (LightGBM) and Extreme Gradient Boosted Decision Trees (XGBoost), finding that XGBoost performs better for predicting actual search rankings, with an average accuracy of 0.86. The feature analysis shows the most impactful features are (a) internal and external links, (b) security of the web domain, and (c) length of H3 headings, and the least impactful features are (a) keyword mentioned in domain address, (b) keyword mentioned in the H1 headings, and (c) overall number of keyword mentions in the text. The results highlight the persistent importance of links in search-engine optimization. We provide actionable insights for online marketers and content creators.
Joni O. Salminen, Juan Corporan, Roope Marttila, Tommi Salenius, B. Jansen. "Using Machine Learning to Predict Ranking of Webpages in the Gift Industry: Factors for Search-Engine Optimization." In Proceedings of the 9th International Conference on Information Systems and Technologies, 2019. https://doi.org/10.1145/3361570.3361578
Moustapha Cissé, Marie Ndiaye, Khalifa Gaye, O. Sall, Mouhamadou Saliou Diallo
The relevance of the information provided, its intelligibility, and its adaptation to customer usage and preferences are key factors in the rejection or success of an information system. Thus, in a context of information superabundance and heterogeneity, the implementation of mechanisms for personalizing information has become essential. Personalizing information means adapting it to the user. In most information personalization mechanisms, there is agreement on the usefulness of having a user profile. The latter can be seen as a factoring-out of the invariant part of the user's preferences. There are two types of profiles: the statistical profile and the inferred profile. Several approaches and techniques can be used to construct and update this profile. Personalization of information can operate in two modes, query or recommendation, and the systems using it can be divided into three groups: recommendation systems, contextual access systems, and custom meta-search engines. In addition, it can be used in various technological fields such as information retrieval, databases, and human-machine interfaces. In this work, we carry out a comparative study of the mechanisms for personalization of information.
Moustapha Cissé, Marie Ndiaye, Khalifa Gaye, O. Sall, Mouhamadou Saliou Diallo. "Comparative study of the mechanisms for personalization of information." In Proceedings of the 9th International Conference on Information Systems and Technologies, 2019. https://doi.org/10.1145/3361570.3361610