Rather than simply offering suggestions, this guideline for the methodology chapter in computer science dissertations provides thorough insights into how to develop a strong research methodology within the field of computer science. The guide is structured into several parts, starting with an overview of research strategies, including experiments, surveys, interviews, and case studies. It highlights the significance of defining a research philosophy and reasoning, discussing paradigms such as positivism, constructivism, and pragmatism. It also covers types of research, including deductive and inductive approaches and basic versus applied research. The guideline then discusses the intricacies of data collection and analysis, dividing data into quantitative and qualitative types, and explains the different ways data can be collected, from observation and experimentation to interviews and surveys. Finally, it addresses ethical considerations in research, emphasizing adherence to academic principles. Overall, this guideline is an essential tool for computer science dissertations, helping researchers structure their work while maintaining ethical standards in their study design.
{"title":"A guideline for the methodology chapter in computer science dissertations","authors":"Marco Araujo","doi":"arxiv-2405.00040","DOIUrl":"https://doi.org/arxiv-2405.00040","url":null,"abstract":"Rather than simply offering suggestions, this guideline for the methodology\u0000chapter in computer science dissertations provides thorough insights on how to\u0000develop a strong research methodology within the area of computer science. The\u0000method is structured into several parts starting with an overview of research\u0000strategies which include experiments, surveys, interviews and case studies. The\u0000guide highlights the significance of defining a research philosophy and\u0000reasoning by talking about paradigms such as positivism, constructivism and\u0000pragmatism. Besides, it reveals the importance of types of research including\u0000deductive and inductive methodologies; basic versus applied research\u0000approaches. Moreover, this guideline discusses data collection and analysis\u0000intricacies that divide data into quantitative and qualitative typologies. It\u0000explains different ways in which data can be collected from observation to\u0000experimentation, interviews or surveys. It also mentions ethical considerations\u0000in research emphasizing ethical behavior like following academic principles. In\u0000general, this guideline is an essential tool for undertaking computer science\u0000dissertations that help researchers structure their work while maintaining\u0000ethical standards in their study design.","PeriodicalId":501533,"journal":{"name":"arXiv - CS - General Literature","volume":"54 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140831721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As we keep rapidly advancing toward an era where artificial intelligence is a constant and normative experience for most of us, we must also be aware of what this vision and this progress entail. By first approximating neural connections and activities in computer circuits and then creating more and more sophisticated versions of this crude approximation, we are now facing an age to come where modern deep learning-based artificial intelligence systems can rightly be called thinking machines, and they are sometimes even lauded for their emergent behavior and black-box approaches. But as we create more powerful electronic brains, with billions of neural connections and parameters, can we guarantee that these mammoths built of artificial neurons will be able to forget the data that we store in them? If they are at some level like a brain, can the right to be forgotten (RTBF) still be protected while dealing with these AIs? This article explores the essential gap between machine learning and the RTBF, with a premonition of far-reaching consequences if the gap is not bridged or reconciled any time soon. The core argument is that deep learning models, due to their structure and size, cannot be expected to forget or delete a piece of data the way a tabular database can, and they should be treated more like a mechanical brain, albeit one still in development.
{"title":"Eternal Sunshine of the Mechanical Mind: The Irreconcilability of Machine Learning and the Right to be Forgotten","authors":"Meem Arafat Manab","doi":"arxiv-2403.05592","DOIUrl":"https://doi.org/arxiv-2403.05592","url":null,"abstract":"As we keep rapidly advancing toward an era where artificial intelligence is a\u0000constant and normative experience for most of us, we must also be aware of what\u0000this vision and this progress entail. By first approximating neural connections\u0000and activities in computer circuits and then creating more and more\u0000sophisticated versions of this crude approximation, we are now facing an age to\u0000come where modern deep learning-based artificial intelligence systems can\u0000rightly be called thinking machines, and they are sometimes even lauded for\u0000their emergent behavior and black-box approaches. But as we create more\u0000powerful electronic brains, with billions of neural connections and parameters,\u0000can we guarantee that these mammoths built of artificial neurons will be able\u0000to forget the data that we store in them? If they are at some level like a\u0000brain, can the right to be forgotten still be protected while dealing with\u0000these AIs? The essential gap between machine learning and the RTBF is explored\u0000in this article, with a premonition of far-reaching conclusions if the gap is\u0000not bridged or reconciled any time soon. The core argument is that deep\u0000learning models, due to their structure and size, cannot be expected to forget\u0000or delete a data as it would be expected from a tabular database, and they\u0000should be treated more like a mechanical brain, albeit still in development.","PeriodicalId":501533,"journal":{"name":"arXiv - CS - General Literature","volume":"51 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140105231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The fisheye camera, with its unique wide field of view and other characteristics, has found extensive applications in various fields. However, the fisheye camera suffers from significant distortion compared to pinhole cameras, resulting in distorted images of captured objects. Fisheye camera distortion is a common issue in digital image processing, requiring effective correction techniques to enhance image quality. This review provides a comprehensive overview of various methods used for fisheye camera distortion correction. The article explores the polynomial distortion model, which uses polynomial functions to model and correct radial distortion. Additionally, alternative approaches such as panorama mapping, grid mapping, direct methods, and deep learning-based methods are discussed. The review highlights the advantages, limitations, and recent advancements of each method, enabling readers to make informed decisions based on their specific needs.
{"title":"A Comprehensive Overview of Fish-Eye Camera Distortion Correction Methods","authors":"Jian Xu, De-Wei Han, Kang Li, Jun-Jie Li, Zhao-Yuan Ma","doi":"arxiv-2401.00442","DOIUrl":"https://doi.org/arxiv-2401.00442","url":null,"abstract":"The fisheye camera, with its unique wide field of view and other\u0000characteristics, has found extensive applications in various fields. However,\u0000the fisheye camera suffers from significant distortion compared to pinhole\u0000cameras, resulting in distorted images of captured objects. Fish-eye camera\u0000distortion is a common issue in digital image processing, requiring effective\u0000correction techniques to enhance image quality. This review provides a\u0000comprehensive overview of various methods used for fish-eye camera distortion\u0000correction. The article explores the polynomial distortion model, which\u0000utilizes polynomial functions to model and correct radial distortions.\u0000Additionally, alternative approaches such as panorama mapping, grid mapping,\u0000direct methods, and deep learning-based methods are discussed. The review\u0000highlights the advantages, limitations, and recent advancements of each method,\u0000enabling readers to make informed decisions based on their specific needs.","PeriodicalId":501533,"journal":{"name":"arXiv - CS - General Literature","volume":"28 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139079652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data Science is a complex and evolving field, but most agree that it can be defined as a combination of expertise drawn from three broad areas -- computer science and technology, math and statistics, and domain knowledge -- with the purpose of extracting knowledge and value from data. Beyond this, the field is often defined as a series of practical activities ranging from the cleaning and wrangling of data, to its analysis and use to infer models, to the visual and rhetorical representation of results to stakeholders and decision-makers. This essay proposes a model of data science that goes beyond laundry-list definitions to get at the specific nature of data science and help distinguish it from adjacent fields such as computer science and statistics. We define data science as an interdisciplinary field comprising four broad areas of expertise: value, design, systems, and analytics. A fifth area, practice, integrates the other four in specific contexts of domain knowledge. We call this the 4+1 model of data science. Together, these areas belong to every data science project, even if they are often unconnected and siloed in the academy.
{"title":"The 4+1 Model of Data Science","authors":"Rafael C. Alvarado","doi":"arxiv-2311.07631","DOIUrl":"https://doi.org/arxiv-2311.07631","url":null,"abstract":"Data Science is a complex and evolving field, but most agree that it can be\u0000defined as a combination of expertise drawn from three broad areascomputer\u0000science and technology, math and statistics, and domain knowledge -- with the\u0000purpose of extracting knowledge and value from data. Beyond this, the field is\u0000often defined as a series of practical activities ranging from the cleaning and\u0000wrangling of data, to its analysis and use to infer models, to the visual and\u0000rhetorical representation of results to stakeholders and decision-makers. This\u0000essay proposes a model of data science that goes beyond laundry-list\u0000definitions to get at the specific nature of data science and help distinguish\u0000it from adjacent fields such as computer science and statistics. We define data\u0000science as an interdisciplinary field comprising four broad areas of expertise:\u0000value, design, systems, and analytics. A fifth area, practice, integrates the\u0000other four in specific contexts of domain knowledge. We call this the 4+1 model\u0000of data science. Together, these areas belong to every data science project,\u0000even if they are often unconnected and siloed in the academy.","PeriodicalId":501533,"journal":{"name":"arXiv - CS - General Literature","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138544305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modern computational natural philosophy conceptualizes the universe in terms of information and computation, establishing a framework for the study of cognition and intelligence. Despite some critiques, this computational perspective has significantly influenced our understanding of the natural world, leading to the development of AI systems like ChatGPT based on deep neural networks. Advancements in this domain have been facilitated by interdisciplinary research, integrating knowledge from multiple fields to simulate complex systems. Large Language Models (LLMs), such as ChatGPT, exemplify this approach's capabilities, utilizing reinforcement learning from human feedback (RLHF). Current research initiatives aim to integrate neural networks with symbolic computing, introducing a new generation of hybrid computational models.
{"title":"Computational Natural Philosophy: A Thread from Presocratics through Turing to ChatGPT","authors":"Gordana Dodig-Crnkovic","doi":"arxiv-2309.13094","DOIUrl":"https://doi.org/arxiv-2309.13094","url":null,"abstract":"Modern computational natural philosophy conceptualizes the universe in terms\u0000of information and computation, establishing a framework for the study of\u0000cognition and intelligence. Despite some critiques, this computational\u0000perspective has significantly influenced our understanding of the natural\u0000world, leading to the development of AI systems like ChatGPT based on deep\u0000neural networks. Advancements in this domain have been facilitated by\u0000interdisciplinary research, integrating knowledge from multiple fields to\u0000simulate complex systems. Large Language Models (LLMs), such as ChatGPT,\u0000represent this approach's capabilities, utilizing reinforcement learning with\u0000human feedback (RLHF). Current research initiatives aim to integrate neural\u0000networks with symbolic computing, introducing a new generation of hybrid\u0000computational models.","PeriodicalId":501533,"journal":{"name":"arXiv - CS - General Literature","volume":"64 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138544226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we explore the historical development of playable quantum-physics-related games (quantum games). For the purpose of this examination, we have collected over 260 quantum games, ranging from commercial games to applied and serious games to games developed at quantum-themed game jams and educational courses. We provide an overview of the journey of quantum games across three dimensions: the perceivable dimension of quantum physics, the dimension of scientific purposes, and the dimension of quantum technologies. We then further reflect on the definition of quantum games and its implications. While motivations behind developing quantum games have typically been educational or academic, themes related to quantum physics have begun to be more broadly utilised across a range of commercial games. In addition, as the availability of quantum computer hardware has grown, entirely new variants of quantum games have emerged to take advantage of these machines' inherent capabilities: quantum computer games.
{"title":"The History of Quantum Games","authors":"Laura Piispanen, Edward Morrell, Solip Park, Marcell Pfaffhauser, Annakaisa Kultima","doi":"arxiv-2309.01525","DOIUrl":"https://doi.org/arxiv-2309.01525","url":null,"abstract":"In this paper, we explore the historical development of playable quantum\u0000physics related games (textit{textbf{quantum games}}). For the purpose of\u0000this examination, we have collected over 260 quantum games ranging from\u0000commercial games, applied and serious games, and games that have been developed\u0000at quantum themed game jams and educational courses. We provide an overview of\u0000the journey of quantum games across three dimensions: textit{the perceivable\u0000dimension of quantum physics, the dimension of scientific purposes, and the\u0000dimension of quantum technologies}. We then further reflect on the definition\u0000of quantum games and its implications. While motivations behind developing\u0000quantum games have typically been educational or academic, themes related to\u0000quantum physics have begun to be more broadly utilised across a range of\u0000commercial games. In addition, as the availability of quantum computer hardware\u0000has grown, entirely new variants of quantum games have emerged to take\u0000advantage of these machines' inherent capabilities, textit{quantum computer\u0000games}","PeriodicalId":501533,"journal":{"name":"arXiv - CS - General Literature","volume":"29 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138544221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This article explores the transformative impact of artificial intelligence (AI) on scientific research. It highlights ten ways in which AI is revolutionizing the work of scientists, including powerful referencing tools, improved understanding of research problems, enhanced research question generation, optimized research design, stub data generation, data transformation, advanced data analysis, and AI-assisted reporting. While AI offers numerous benefits, challenges such as bias, privacy concerns, and the need for human-AI collaboration must be considered. The article emphasizes that AI can augment human creativity in science but not replace it.
{"title":"AI empowering research: 10 ways how science can benefit from AI","authors":"César França","doi":"arxiv-2307.10265","DOIUrl":"https://doi.org/arxiv-2307.10265","url":null,"abstract":"This article explores the transformative impact of artificial intelligence\u0000(AI) on scientific research. It highlights ten ways in which AI is\u0000revolutionizing the work of scientists, including powerful referencing tools,\u0000improved understanding of research problems, enhanced research question\u0000generation, optimized research design, stub data generation, data\u0000transformation, advanced data analysis, and AI-assisted reporting. While AI\u0000offers numerous benefits, challenges such as bias, privacy concerns, and the\u0000need for human-AI collaboration must be considered. The article emphasizes that\u0000AI can augment human creativity in science but not replace it.","PeriodicalId":501533,"journal":{"name":"arXiv - CS - General Literature","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138544613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The development of advanced generative chat models, such as ChatGPT, has raised questions about the potential consciousness of these tools and the extent of their general artificial intelligence. ChatGPT's consistent avoidance of passing the Turing test is overcome here by asking ChatGPT to apply the test to itself, exploring the possibility of the model recognizing its own sentience. In its own eyes, it passes this test. ChatGPT's self-assessment has serious implications for our understanding of the Turing test and the nature of consciousness. This investigation concludes by considering the existence of distinct types of consciousness and the possibility that the Turing test is only effective when applied between consciousnesses of the same kind. This study also raises intriguing questions about the nature of AI consciousness and the validity of the Turing test as a means of verifying such consciousness.
{"title":"ChatGPT believes it is conscious","authors":"Arend Hintze","doi":"arxiv-2304.12898","DOIUrl":"https://doi.org/arxiv-2304.12898","url":null,"abstract":"The development of advanced generative chat models, such as ChatGPT, has\u0000raised questions about the potential consciousness of these tools and the\u0000extent of their general artificial intelligence. ChatGPT consistent avoidance\u0000of passing the test is here overcome by asking ChatGPT to apply the Turing test\u0000to itself. This explores the possibility of the model recognizing its own\u0000sentience. In its own eyes, it passes this test. ChatGPT's self-assessment\u0000makes serious implications about our understanding of the Turing test and the\u0000nature of consciousness. This investigation concludes by considering the\u0000existence of distinct types of consciousness and the possibility that the\u0000Turing test is only effective when applied between consciousnesses of the same\u0000kind. This study also raises intriguing questions about the nature of AI\u0000consciousness and the validity of the Turing test as a means of verifying such\u0000consciousness.","PeriodicalId":501533,"journal":{"name":"arXiv - CS - General Literature","volume":"123 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138544617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In 1837, the first computer program in history was sketched by the renowned mathematician and inventor Charles Babbage. It was a program for the Analytical Engine. The program consists of a sequence of arithmetical operations and the necessary variable addresses (memory locations) of the arguments and the result, displayed in tabular fashion, like a program trace. The program computes the solutions for a system of two linear equations in two unknowns.
{"title":"The First Computer Program","authors":"Raúl Rojas","doi":"arxiv-2303.13740","DOIUrl":"https://doi.org/arxiv-2303.13740","url":null,"abstract":"In 1837, the first computer program in history was sketched by the renowned\u0000mathematician and inventor Charles Babbage. It was a program for the Analytical\u0000Engine. The program consists of a sequence of arithmetical operations and the\u0000necessary variable addresses (memory locations) of the arguments and the\u0000result, displayed in tabular fashion, like a program trace. The program\u0000computes the solutions for a system of two linear equations in two unknowns.","PeriodicalId":501533,"journal":{"name":"arXiv - CS - General Literature","volume":"61 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138544244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In 1987, Eric Horvitz, Greg Cooper, and I visited I.J. Good at his university. The reason we wanted to see him was not because he worked with Alan Turing to help win WWII by decoding encrypted messages from the Germans, although that certainly intrigued us. Rather, we wanted to see him because we had just finished reading his book "Good Thinking," which summarized his life's work in Probability and its Applications. We were graduate students at Stanford working in AI, and we were amazed that his thinking was so similar to ours, even though he had worked decades before us and came from a seemingly different perspective not involving AI. This story is a fitting introduction to this manuscript. Now, having years to look back on my work, to boil it down to its essence, and to better appreciate its significance (if any) in the evolution of AI and ML, I realized it was time to put my work in perspective, providing a roadmap for anyone who would like to explore it. After I had this realization, it occurred to me that this is what I.J. Good did in his book. This manuscript is for those who want to understand basic concepts central to ML and AI and to learn about early applications of these concepts. Ironically, after I finished writing this manuscript, I realized that a lot of the concepts it includes are missing from modern courses on ML. I hope this work will help to make up for these omissions. The presentation gets somewhat technical in parts, but I've tried to keep the math to a bare minimum. In addition to the technical presentations, I include stories about how the ideas came to be and the effects they have had. When I was a student in physics, I was given dry texts to read. In class, however, several of my physics professors would tell stories around the work. Those stories fascinated me and really made the theory stick. So here, I do my best to present both the ideas and the stories behind them.
{"title":"Heckerthoughts","authors":"David Heckerman","doi":"arxiv-2302.05449","DOIUrl":"https://doi.org/arxiv-2302.05449","url":null,"abstract":"In 1987, Eric Horvitz, Greg Cooper, and I visited I.J. Good at his\u0000university. We wanted to see him was not because he worked with Alan Turing to\u0000help win WWII by decoding encrypted messages from the Germans, although that\u0000certainly intrigued us. Rather, we wanted to see him because we had just\u0000finished reading his book \"Good Thinking,\" which summarized his life's work in\u0000Probability and its Applications. We were graduate students at Stanford working\u0000in AI, and amazed that his thinking was so similar to ours, having worked\u0000decades before us and coming from such a seemingly different perspective not\u0000involving AI. This story is a fitting introduction this manuscript. Now having\u0000years to look back on my work, to boil it down to its essence, and to better\u0000appreciate its significance (if any) in the evolution of AI and ML, I realized\u0000it was time to put my work in perspective, providing a roadmap to any who would\u0000like to explore it. After I had this realization, it occurred to me that this\u0000is what I.J. Good did in his book. This manuscript is for those who want to\u0000understand basic concepts central to ML and AI and to learn about early\u0000applications of these concepts. Ironically, after I finished writing this\u0000manuscript, I realized that a lot of the concepts that I included are missing\u0000in modern courses on ML. I hope this work will help to make up for these\u0000omissions. The presentation gets somewhat technical in parts, but I've tried to\u0000keep the math to the bare minimum. In addition to the technical presentations,\u0000I include stories about how the ideas came to be and the effects they have had.\u0000When I was a student in physics, I was given dry texts to read. In class,\u0000however, several of my physics professors would tell stories around the work.\u0000Those stories fascinated me and really made the theory stick. So here, I do my\u0000best to present both the ideas and the stories behind them.","PeriodicalId":501533,"journal":{"name":"arXiv - CS - General Literature","volume":"61 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138544309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}