Title: Conventional and neural network target-matching methods dynamics: The information technology mergers and acquisitions market in the USA
Authors: Ioannis Anagnostopoulos, Anas Rizeq
DOI: 10.1002/isaf.1492
Intelligent Systems in Accounting, Finance and Management, 28(2), 97–118. Published 2 June 2021.

In an era of a continuous quest for business growth and sustainability, synergies and growth-driven mergers and acquisitions (M&As) have become an integral part of institutional strategy. To endure fierce competition, M&As have become an important channel for obtaining technology and increasing competitiveness and market share (Carbone & Stone, 2005; Christensen et al., 2011). In the post-2000 era, more than half of the available growth and synergies in M&As have been strongly related to information technology (IT) and its disruptive synergistic potential, as the first decade of the 2000s has shown (Sarrazin & West, 2011). Such business growth materializes at the intersection of internalizing, integrating, and applying the latest data-management technology with M&As, where there are vast opportunities to develop (a) new technologies, (b) new target-screening and valuation methodologies, (c) new products, (d) new services, and (e) new business models (Hacklin et al., 2013; Lee & Lee, 2017). However, while technology and its disruptive capabilities have received considerable attention from the business community, studies examining technology convergence, integration dynamics, and M&A success from a market-screening and valuation perspective are relatively scarce (Lee & Cho, 2015; Song et al., 2017). Furthermore, little attention has been devoted to the evolutionary path of technology-assisted target-screening methods and their potential for effective target acquisition in the future (Aaldering et al., 2019). We contribute to this by examining the application of neural network (NN) methodology to successful target screening in the US IT M&A sector.

In addition, while there are recognized idiosyncratic motivations for pursuing M&A-centered growth strategies, there are also considerable system-wide forces that trigger waves of global M&A deals. Examples include reactions to globalization dynamics, changes in competition, tax reforms (such as the recent US tax reform offering tax benefits to investors), deregulation, economic reform and liberalization, and bloc or regional economic integration (e.g., the Gulf Cooperation Council and the EU). Hence, effective target-firm identification is an important research topic for business leaders and academics from both management and economic perspectives.

Technology firms in particular often exhibit unconventional growth patterns, which makes firm valuation problematic: their stocks can become hugely misvalued (i.e., overvalued), thereby increasing M&A activity (Rhodes-Kropf & Viswanathan, 2004). Betton et al. (2008) claimed that predicting targets with any degr…
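The NN-based target-screening idea described above can be sketched with a small feed-forward network trained on firm-level features. Everything below is hypothetical: the feature set, the labeling rule, and the architecture are illustrative stand-ins, not the paper's actual data or model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, h = 400, 4, 8
# Synthetic standardized firm features, e.g. P/E, leverage, growth, margin (hypothetical)
X = rng.normal(size=(n, d))
# Hypothetical acquisition rule: high-growth, low-leverage firms attract acquirers
y = (X[:, 2] - X[:, 1] + 0.3 * X[:, 0] > 0).astype(float)

# One hidden tanh layer, logistic output
W1 = rng.normal(scale=0.5, size=(d, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.5, size=h);      b2 = 0.0

def forward(X):
    H = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))
    return H, p

for _ in range(2000):            # plain gradient descent on cross-entropy loss
    H, p = forward(X)
    g = (p - y) / n              # d(loss)/d(logit)
    gW2 = H.T @ g; gb2 = g.sum()
    gH = np.outer(g, W2) * (1.0 - H ** 2)   # backprop through tanh
    gW1 = X.T @ gH; gb1 = gH.sum(axis=0)
    W1 -= 1.0 * gW1; b1 -= 1.0 * gb1
    W2 -= 1.0 * gW2; b2 -= 1.0 * gb2

_, p = forward(X)
accuracy = ((p > 0.5) == (y > 0.5)).mean()
```

A real screening model would replace the synthetic features with historical financials of acquired versus non-acquired firms and hold out a test set; the sketch only shows the classification mechanics.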
Title: Data-driven optimization of peer-to-peer lending portfolios based on the expected value framework
Authors: Ajay Byanjankar, József Mezei, Markku Heikkilä
DOI: 10.1002/isaf.1490
Intelligent Systems in Accounting, Finance and Management, 28(2), 119–129. Published 17 March 2021.

In recent years, peer-to-peer (P2P) lending has been gaining popularity amongst borrowers and individual investors. This can mainly be attributed to easy and quick access to loans and the higher possible returns. However, the risk involved in these investments is considerable, and for most investors, being nonprofessionals, this increases the complexity and the importance of investment decisions. In this study, we focus on generating optimal loan-selection decisions for lenders. We treat the loan-selection process in P2P lending as a portfolio optimization problem, the aim being to select a set of loans that provides a required return while minimizing risk. We use internal rate of return as the measure of return. As the starting point of the model, we use machine-learning algorithms to predict default probabilities and calculate expected values for the loans based on historical data. Afterwards, we calculate the distance between loans using (i) default probabilities and, as a novel step, (ii) expected value. In the calculations, we utilize kernel functions to obtain similarity weights of loans as the input to the optimization models. Two optimization models are tested and compared on data from the popular P2P platform Lending Club. The results show that the expected-value framework yields higher returns.
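The pipeline the abstract describes — predict default probabilities, derive expected values, then compute kernel-based similarity weights — can be sketched as follows. The logistic scoring weights, the 40% recovery rate, and the kernel bandwidths are all invented for illustration; the paper's actual classifiers and Lending Club features are not reproduced here.

```python
import numpy as np

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-8):
    """Internal rate of return via bisection on the NPV function
    (valid for a loan-shaped stream: one outflow, then inflows)."""
    def npv(r):
        return sum(cf / (1.0 + r) ** t for t, cf in enumerate(cashflows))
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid) > 0:        # NPV decreases in r, so move the bracket
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

rng = np.random.default_rng(42)
n = 50
# Hypothetical per-loan features: interest rate, debt-to-income, credit score
rate  = rng.uniform(0.06, 0.25, n)
dti   = rng.uniform(0.05, 0.40, n)
score = rng.uniform(600, 820, n)

# Stand-in for the machine-learning step: a logistic score with made-up weights
z = -4.0 + 8.0 * dti + 6.0 * rate - 0.002 * (score - 700)
p_default = 1.0 / (1.0 + np.exp(-z))

# Expected value per unit invested: full repayment at the loan rate if the
# loan survives, an assumed 40% recovery on default (illustrative)
recovery = 0.40
expected_value = (1.0 - p_default) * (1.0 + rate) + p_default * recovery

def gaussian_kernel(values, sigma):
    """Similarity weights between loans from a 1-D feature."""
    diff = values[:, None] - values[None, :]
    return np.exp(-diff ** 2 / (2.0 * sigma ** 2))

# The paper's two variants: distances on (i) default probability, (ii) expected value
K_pd = gaussian_kernel(p_default, sigma=0.05)
K_ev = gaussian_kernel(expected_value, sigma=0.02)
```

The kernel matrices would then feed the portfolio optimization models as similarity weights; a single-period loan returning 110 on a 100 investment gives `irr([-100, 110])` ≈ 0.10, matching the abstract's choice of IRR as the return measure.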
Title: Forecasting volatility of crude oil futures using a GARCH–RNN hybrid approach
Author: Sauraj Verma
DOI: 10.1002/isaf.1489
Intelligent Systems in Accounting, Finance and Management, 28(2), 130–142. Published 11 March 2021.

Volatility is an important element of various financial instruments owing to its ability to measure the risk and reward value of a given financial asset. Consequently, forecasting volatility has become a critical task in financial forecasting. In this paper, we propose a suite of hybrid models for forecasting crude oil volatility over different forecasting horizons. Specifically, we combine the parameters of generalized autoregressive conditional heteroscedasticity (GARCH) and Glosten–Jagannathan–Runkle GARCH (GJR-GARCH) with long short-term memory (LSTM) to create three new forecasting models — GARCH–LSTM, GJR–LSTM, and GARCH–GJR-GARCH–LSTM (GG–LSTM) — in order to forecast the volatility of West Texas Intermediate crude oil over different horizons and compare their performance with classical volatility-forecasting models. Against existing methodologies such as GARCH, the proposed hybrid models improve forecasting accuracy across the horizons considered and outperform both GARCH and GJR-GARCH. GG–LSTM performs best of the three proposed models at 7-, 14-, and 21-day-ahead forecasts in terms of heteroscedasticity-adjusted mean square error and heteroscedasticity-adjusted mean absolute error; significance testing via the model confidence set shows that GG–LSTM is a strong contender for forecasting crude oil volatility under different forecasting regimes and rolling-window schemes. The contribution of the paper is twofold: it enhances the forecasting of crude oil futures volatility, which is essential for trading, hedging, and arbitrage purposes, and it builds on the existing literature by fusing a neural network model with multiple econometric models to improve forecasting accuracy.
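The hybrid idea above — feeding GARCH-family conditional variances into an LSTM — can be sketched with a GARCH(1,1) recursion whose output is stacked with squared returns into the rolling windows an LSTM layer consumes, plus the heteroscedasticity-adjusted losses used for evaluation. The parameter values are illustrative, not estimated from WTI data, and the LSTM itself is omitted.

```python
import numpy as np

def garch11_variance(returns, omega=1e-6, alpha=0.08, beta=0.90):
    """GARCH(1,1) recursion: h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}.
    Illustrative fixed parameters; in practice omega, alpha, beta come from MLE."""
    h = np.empty(len(returns))
    h[0] = returns.var()                      # initialize at the sample variance
    for t in range(1, len(returns)):
        h[t] = omega + alpha * returns[t - 1] ** 2 + beta * h[t - 1]
    return h

def make_windows(features, lookback):
    """Rolling windows of shape (lookback, n_features) — the input an RNN expects."""
    T = features.shape[0]
    return np.stack([features[t - lookback:t] for t in range(lookback, T)])

# Heteroscedasticity-adjusted losses, as used in the paper's evaluation
def hmse(h_true, h_pred):
    return np.mean((1.0 - h_pred / h_true) ** 2)

def hmae(h_true, h_pred):
    return np.mean(np.abs(1.0 - h_pred / h_true))

rng = np.random.default_rng(1)
r = rng.normal(scale=0.02, size=500)          # stand-in daily returns, not WTI data
h = garch11_variance(r)
X = make_windows(np.column_stack([r ** 2, h]), lookback=21)   # (samples, 21, 2)
```

In the hybrid setup, `X` would be passed to an LSTM trained to predict the next-period variance; alpha + beta < 1 keeps the GARCH recursion covariance-stationary, which is why the illustrative values sum to 0.98.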