{"title":"PReLU:XOR 问题的另一种单层解决方案","authors":"Rafael C. Pinto, Anderson R. Tavares","doi":"arxiv-2409.10821","DOIUrl":null,"url":null,"abstract":"This paper demonstrates that a single-layer neural network using Parametric\nRectified Linear Unit (PReLU) activation can solve the XOR problem, a simple\nfact that has been overlooked so far. We compare this solution to the\nmulti-layer perceptron (MLP) and the Growing Cosine Unit (GCU) activation\nfunction and explain why PReLU enables this capability. Our results show that\nthe single-layer PReLU network can achieve 100\\% success rate in a wider range\nof learning rates while using only three learnable parameters.","PeriodicalId":501347,"journal":{"name":"arXiv - CS - Neural and Evolutionary Computing","volume":"38 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"PReLU: Yet Another Single-Layer Solution to the XOR Problem\",\"authors\":\"Rafael C. Pinto, Anderson R. Tavares\",\"doi\":\"arxiv-2409.10821\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper demonstrates that a single-layer neural network using Parametric\\nRectified Linear Unit (PReLU) activation can solve the XOR problem, a simple\\nfact that has been overlooked so far. We compare this solution to the\\nmulti-layer perceptron (MLP) and the Growing Cosine Unit (GCU) activation\\nfunction and explain why PReLU enables this capability. Our results show that\\nthe single-layer PReLU network can achieve 100\\\\% success rate in a wider range\\nof learning rates while using only three learnable parameters.\",\"PeriodicalId\":501347,\"journal\":{\"name\":\"arXiv - CS - Neural and Evolutionary Computing\",\"volume\":\"38 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Neural and Evolutionary Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.10821\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Neural and Evolutionary Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10821","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
PReLU: Yet Another Single-Layer Solution to the XOR Problem
This paper demonstrates that a single-layer neural network using Parametric Rectified Linear Unit (PReLU) activation can solve the XOR problem, a simple fact that has been overlooked so far. We compare this solution to the multi-layer perceptron (MLP) and the Growing Cosine Unit (GCU) activation function and explain why PReLU enables this capability. Our results show that the single-layer PReLU network can achieve a 100% success rate across a wider range of learning rates while using only three learnable parameters.
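For intuition, the following is a minimal sketch of how a single PReLU unit with three parameters (two input weights and the negative-side slope, with no bias term) can realize XOR. The specific parameter values below are hand-set for illustration and are an assumption, not taken from the paper: with w1 = 1, w2 = -1 and slope a = -1, the unit computes |x1 - x2|, which matches the XOR truth table on {0,1}^2.

    # Illustrative sketch (assumed parameterization, not the authors' exact setup):
    # a single PReLU unit y = PReLU_a(w1*x1 + w2*x2) with hand-set w1=1, w2=-1, a=-1.
    import numpy as np

    def prelu(z, a):
        """Parametric ReLU: identity for z >= 0, slope a for z < 0."""
        return np.where(z >= 0, z, a * z)

    w = np.array([1.0, -1.0])  # input weights w1, w2 (hand-set for illustration)
    a = -1.0                   # PReLU negative-side slope (hand-set for illustration)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y_true = np.array([0, 1, 1, 0], dtype=float)  # XOR targets

    y_pred = prelu(X @ w, a)
    print(y_pred)                       # [0. 1. 1. 0.]
    print(np.allclose(y_pred, y_true))  # True

The point of the sketch is that a learnable negative-side slope lets a single unit fold the input space (here effectively computing an absolute value), which a single linear or ReLU unit cannot do; whether the paper uses exactly this bias-free parameterization is an assumption here.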