Complex systems and ‘Spatio-Temporal Anti-Compliance Coordination’ in cyber-physical networks: A critique of the Hipster Effect, bankruptcy prediction and alternative risk premia
Michael I. C. Nwogugu
DOI: 10.1049/ccs2.12029 (published 2021-08-28)

The Hipster Effect is a group of evolutionary ‘Diffusive Learning’ processes of networks of individuals and groups (and their communication devices) that form Cyber-Physical Systems, and the Hipster Effect theory has potential applications in many fields of research. This study addresses decision-making parameters in machine-learning algorithms; more specifically, it critiques the explanations for the Hipster Effect and discusses the implications for portfolio management and corporate bankruptcy prediction, two areas where AI has been used extensively. The methodological approach is entirely theoretical analysis. The main findings are as follows: (i) the Hipster Effect theory and its associated mathematical models are flawed; (ii) some decision-making and learning models in machine-learning algorithms are flawed; (iii) regardless of whether the Hipster Effect theory is correct, it can be used to develop portfolio management models, some of which are summarised herein; (iv) the corporate bankruptcy prediction model of [1] can also be used for portfolio selection (stocks and bonds).
Minimum error entropy criterion-based randomised autoencoder
Rongzhi Ma, Tianlei Wang, Jiuwen Cao, Fang Dong
DOI: 10.1049/ccs2.12030 (published 2021-08-02)

The extreme learning machine-based autoencoder (ELM-AE) has attracted considerable attention due to its fast learning speed and promising representation capability. However, existing ELM-AE algorithms only reconstruct the original input and generally ignore the probability distribution of the data. The minimum error entropy (MEE), an optimality criterion that takes the distribution statistics of the data into account, is robust in handling non-linear systems and non-Gaussian noise, and is equivalent to minimising the Kullback–Leibler divergence. Inspired by these advantages, a novel randomised AE is proposed in this study by adopting the MEE criterion as the loss function of the ELM-AE (in short, the MEE-RAE). Instead of solving for the output weights via the Moore–Penrose generalised inverse, the optimal output weights are obtained by a fixed-point iteration method. Further, a quantised MEE (QMEE) is applied to reduce the computational complexity of the MEE-RAE. Simulations show that the QMEE-RAE not only achieves superior generalisation performance but is also more robust to non-Gaussian noise than the ELM-AE.
Improved fault diagnosis algorithm based on artificial immune network model and neighbourhood rough set theory
Yonghuang Zheng, Benhong Li, Shangmin Zhang
DOI: 10.1049/ccs2.12026 (published 2021-07-01)

With the aim of identifying new fault modes in advanced robotic systems, this paper first proposes a fault diagnosis algorithm based on an artificial immune network model that can adjust its pruning threshold. Secondly, the algorithm is improved using neighbourhood rough set theory, and the relationships among the pruning threshold, the misdiagnosis rate, and the missed-diagnosis rate in the shape space are discussed. In addition, an improved algorithm that adaptively adjusts the pruning threshold based solely on an observation index is described. Simulation experiments show that the algorithm can identify new fault modes while keeping the misdiagnosis and missed-diagnosis rates low.
Machine morality, moral progress, and the looming environmental disaster
Ben Kenward, Thomas Sinclair
DOI: 10.1049/ccs2.12027 (published 2021-06-10)

The creation of artificial moral systems requires making difficult choices about which of varying human value sets should be instantiated. The industry-standard approach is to seek and encode moral consensus. Here the authors argue, based on evidence from empirical psychology, that encoding current moral consensus risks reinforcing current norms and thus inhibiting moral progress. However, so do efforts to encode progressive norms. Machine ethics is thus caught between a rock and a hard place. The problem is especially acute when progress beyond prevailing moral norms is urgent, as is currently the case given the inadequacy of those norms in the face of the climate and ecological crisis.
Computing morality: Synthetic ethical decision making and behaviour
Nigel Crook, Selin Nugent, Matthias Rolf, Adam Baimel, Rebecca Raper
DOI: 10.1049/ccs2.12028 (published 2021-06-10)

We find ourselves at a unique point in history. After more than two millennia of debate amongst some of the greatest minds that ever existed, about the nature of morality, the philosophy of ethics, and the attributes of moral agency, and still without consensus, we are coming to a point where artificial intelligence (AI) technology is enabling the creation of machines that will possess a convincing degree of moral competence. The existence of these machines will undoubtedly have an impact on this age-old debate, but we believe they will have a greater impact on society at large as AI technology deepens its integration into the social fabric of our world. The purpose of this special issue on Computing Morality is to bring together different perspectives on this technology and its impact on society. The special issue contains four very different and inspiring contributions.
Review of the techniques used in motor-cognitive human-robot skill transfer
Yuan Guan, Ning Wang, Chenguang Yang
DOI: 10.1049/ccs2.12025 (published 2021-05-30)

Conventional robot programming methods severely limit the reusability of skills: engineers programme a robot in a targeted manner to realise predefined skills, and the low reusability of such general-purpose robot skills shows up chiefly as an inability to cope with novel and complex scenarios. Skill transfer aims to transfer human skills to general-purpose manipulators or mobile robots so that they can replicate human-like behaviours. Commonly used skill-transfer methods, such as learning from demonstration (LfD) and imitation learning, endow the robot with the expert's low-level motor and high-level decision-making abilities, so that skills can be reproduced and generalised according to perceived context. Improving robot cognition usually corresponds to improving this autonomous high-level decision-making ability. Building on the idea of a generic or specialised robot skill library, robots are expected to autonomously reason about when skills are needed and to plan compound movements according to sensory input. Many successful studies in recent years have demonstrated the effectiveness of this approach. Herein, a detailed review is provided of skill-transfer techniques, applications, advancements, and limitations, especially for LfD, and future research directions are suggested.
The anatomy of moral agency: A theological and neuroscience-inspired model of virtue ethics
Nigel Crook, Joseph Corneli
DOI: 10.1049/ccs2.12024 (published 2021-05-30)

VirtuosA (‘virtuous algorithm’) is introduced: a model in which artificial intelligence (AI) systems learn ethical behaviour based on a framework adapted from the Christian philosopher Dallas Willard and brought together with associated neurobiological structures and broader systems thinking. To make the inquiry concrete, the authors present a simple example scenario that illustrates how a robot might acquire behaviour akin to the human virtue of kindness. References to philosophical work by Peter Sloterdijk help contextualise Willard's virtue ethics framework. The VirtuosA architecture can be implemented using state-of-the-art computing practices, plausibly redescribes several concrete scenarios drawn from the computing literature, and exhibits broad coverage relative to other work in ethical AI. Strategies are described for using the model in systems evaluation, particularly the role of ‘embedded evaluation’ within the system, and its broader application as a meta-ethical device is discussed.
Exploring conventional enhancement and separation methods for multi-speech enhancement in indoor environments
Yangjie Wei, Ke Zhang, Dan Wu, Zhongqi Hu
DOI: 10.1049/ccs2.12023 (published 2021-05-30)

Speech enhancement is an important preprocessing step in a wide diversity of practical fields related to speech signals, and many signal-processing methods have been proposed for it. However, the lack of a comprehensive, quantitative evaluation of enhancement performance for multi-speech scenarios makes it difficult to choose an appropriate method for a given application. This work studies the implementation of several enhancement methods for multi-speech enhancement in indoor environments with reverberation times of T60 = 0 s and T60 = 0.3 s. Two types of enhancement approach are proposed and compared. The first comprises basic enhancement methods: delay-and-sum beamforming (DSB), minimum variance distortionless response (MVDR), linearly constrained minimum variance (LCMV), and independent component analysis (ICA). The second comprises robust enhancement methods: improved MVDR and LCMV realised by eigendecomposition and diagonal loading. In addition, online enhancement based on iterating over single-frame speech signals is investigated, as is the comprehensive performance of the various methods. The experimental results show that among the basic methods, the enhancement effects of LCMV and ICA are relatively more stable, while among the improved algorithms, those employing diagonal-loading iterations perform better. For online enhancement, DSB with frequency masking (FM) yields the best signal-to-interference ratio (SIR) and can suppress interference. The comprehensive performance test shows that LCMV and ICA yield the best effects without reverberation, while DSB with FM yields the best SIR when reverberation is present.
Augmented reality display of neurosurgery craniotomy lesions based on feature contour matching
Hao Zhang, Qi-Yuan Sun, Zhen-Zhong Liu
DOI: 10.1049/ccs2.12021 (published 2021-05-21)

Traditional neurosurgical craniotomy primarily uses two-dimensional cranial medical images to estimate the location of a patient's intracranial lesions. Such work relies on the experience and skill of the doctor and may result in accidental injury to important intracranial physiological tissues. To help doctors determine lesion information more intuitively, and to improve the accuracy of surgical route planning and the safety of craniotomy, an augmented reality method for displaying neurosurgical craniotomy lesions based on feature contour matching is proposed. The method uses threshold segmentation and region-growing algorithms to reconstruct a 3-D computed tomography (CT) image of the patient's head. An augmented reality engine is used to adjust the reconstruction model's parameters to meet the doctor's requirements and to determine the augmented-reality matching method for feature contour matching. By using a mobile terminal to align the real skull model, the virtual lesion model is displayed. Through the designed user interface, doctors can view the patient's personal information and can zoom in, zoom out, and rotate the virtual model. The patient's lesion information can therefore be visualised accurately, providing a visual basis for preoperative preparation.
Prediction of instantaneous likeability of advertisements using deep learning
Dipayan Saha, S. M. Mahbubur Rahman, Mohammad Tariqul Islam, M. Omair Ahmad, M. N. S. Swamy
DOI: 10.1049/ccs2.12022 (published 2021-05-21)

The degree to which advertisements are successful is of prime concern for vendors in highly competitive global markets. Given the astounding growth of multimedia content on the internet, online marketing has become another form of advertising. Researchers consider advertisement likeability a major predictor of effective market penetration. An algorithm is presented that predicts how much an advertisement clip will be liked, using an end-to-end audiovisual feature extraction process built on cognitive computing technology. Specifically, the usefulness of spatial and time-domain deep-learning architectures, such as convolutional neural networks and long short-term memory (LSTM) networks, is investigated for predicting the frame-by-frame instantaneous and root-mean-square likeability of advertisement clips. A data set named the ‘BUET Advertisement Likeness Data Set’, containing frame-wise likeability annotations for various categories of advertisements, is also introduced. Experiments with the developed database show that the proposed algorithm outperforms existing methods on commonly used performance indices, at the expense of slightly increased computational complexity.