"Spectral Proofs of Maximality of Some Seidel Matrices" by Kiyoto Yoshino. Interdisciplinary Information Sciences, DOI: 10.4036/IIS.2021.S.01. Published 2021-01-01.

A set of lines through the origin in a Euclidean space is equiangular if every pair of lines in the set forms the same angle. The problem of determining the maximum cardinality N(d) (d ∈ Z, d ≥ 2) of a set of equiangular lines in R^d dates back to the result of Haantjes [8]. The value N(d) is known for every d ≤ 17 (see [5, Table 1]). Lower bounds on N(d) for larger d are obtained by constructing explicit sets of equiangular lines. We are interested in whether these bounds can be improved, and in particular we check whether certain sets of equiangular lines can be extended. Lin and Yu [9, 10] defined a set X of equiangular lines of rank r to be saturated if there is no line l ∉ X such that the union X ∪ {l} is a set of equiangular lines of rank r. Here, the rank of a set of equiangular lines is the smallest dimension of a Euclidean space into which the lines can be isometrically embedded. Using a computer implementation of their algorithm [10, p. 274], they verified in [10, Theorem 1 and the end of Sect. 3.2] that seven sets of equiangular lines are saturated. Their algorithm requires computing the clique numbers of graphs, which is an NP-hard problem. We verify their results by investigating spectra, without a computer.

We introduce Seidel matrices in connection with equiangular lines. A Seidel matrix is a symmetric matrix with zero diagonal and all off-diagonal entries ±1. Note that if a Seidel matrix S has largest eigenvalue λ0, then there exist vectors whose Gram matrix equals λ0I − S, and these vectors span a set of equiangular lines with common angle arccos(1/λ0). Cao, Koolen, Munemasa and Yoshino [2] defined a Seidel matrix S with largest eigenvalue λ0 to be maximal if there is no Seidel matrix S′ containing S as a proper principal submatrix with largest eigenvalue λ0 such that rank(λ0I − S) = rank(λ0I − S′). In particular, the Seidel matrix obtained from a saturated set of equiangular lines is maximal.

In this paper, we prove Theorem 1.1, which establishes the maximality of the Seidel matrices with the spectra in Table 1, using only their spectra instead of a computer. Specifically, we use Cauchy's interlacing theorem and the angles of matrices, which were used by Greaves and Yatsyna [6] to show that some Seidel spectra do not exist. This method enables us to verify simultaneously the maximality of several Seidel matrices sharing a common spectrum. For example, Szöllősi and Östergård [11, Theorem 5.2] showed that there exist, up to switching equivalence, at least 1045 Seidel matrices of order 28 whose spectrum has the three distinct eigenvalues 5, −3, and −7. Theorem 1.1 implies that these Seidel matrices are maximal.
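The correspondence between a Seidel matrix and a set of equiangular lines described above can be illustrated numerically. The following sketch is not from the paper; it assumes NumPy and uses the 5-cycle as an example graph. Its Seidel matrix S = J − I − 2A has largest eigenvalue √5 with multiplicity 2, so the Gram matrix λ0I − S has rank 3 and the five corresponding lines are equiangular in R³ with common angle arccos(1/√5).

```python
import numpy as np

# Seidel matrix of a graph on n vertices: S = J - I - 2A, i.e. entry -1 for
# adjacent pairs, +1 for non-adjacent pairs, and 0 on the diagonal.
def seidel_matrix(adj):
    n = len(adj)
    return np.ones((n, n)) - np.eye(n) - 2 * np.array(adj, dtype=float)

# Adjacency matrix of the 5-cycle C5.
A = [[0, 1, 0, 0, 1],
     [1, 0, 1, 0, 0],
     [0, 1, 0, 1, 0],
     [0, 0, 1, 0, 1],
     [1, 0, 0, 1, 0]]
S = seidel_matrix(A)

lam0 = max(np.linalg.eigvalsh(S))    # largest eigenvalue, here sqrt(5)
G = lam0 * np.eye(5) - S             # Gram matrix lam0*I - S (positive semidefinite)
r = int(np.linalg.matrix_rank(G))    # rank = smallest dimension spanned, here 3

# Vectors with Gram matrix G span 5 equiangular lines in R^r
# with common angle arccos(1/lam0).
```

Any principal-submatrix extension preserving both λ0 and this rank would contradict maximality, which is exactly the condition the paper checks spectrally.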
"On the Fourth Coefficient of the Inverse of a Starlike Function of Positive Order" by T. Sugawa and Li-Mei Wang. Interdisciplinary Information Sciences, DOI: 10.20944/preprints202011.0428.v1. Published 2020-11-16.

We consider the inverse function z = g(w) of a (normalized) starlike function w = f(z) of order α on the unit disk of the complex plane, with 0 < α < 1. Krzyż, Libera and Złotkiewicz obtained sharp estimates of the second and third coefficients of g(w) in their 1979 paper. Prokhorov and Szynal gave sharp estimates of the fourth coefficient of g(w) as a consequence of the solution to an extremal problem in 1981. We give a straightforward proof of the estimate of the fourth coefficient of g(w), together with explicit forms of the extremal functions.
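The coefficient relations behind such estimates come from series reversion: writing f(z) = z + a₂z² + a₃z³ + a₄z⁴ + ⋯ and g = f⁻¹, equating coefficients in f(g(w)) = w gives b₂ = −a₂, b₃ = 2a₂² − a₃ and b₄ = −5a₂³ + 5a₂a₃ − a₄. A sketch of this computation (not part of the paper; it assumes SymPy is available):

```python
import sympy as sp

w, z = sp.symbols('w z')
a2, a3, a4 = sp.symbols('a2 a3 a4')
b2, b3, b4 = sp.symbols('b2 b3 b4')

# f(z) = z + a2 z^2 + a3 z^3 + a4 z^4 + ... and the inverse ansatz g(w).
f = z + a2 * z**2 + a3 * z**3 + a4 * z**4
g = w + b2 * w**2 + b3 * w**3 + b4 * w**4

# The coefficients of w^2, w^3, w^4 in f(g(w)) - w must all vanish.
comp = sp.Poly(sp.expand(f.subs(z, g)) - w, w)
eqs = [comp.coeff_monomial(w**k) for k in (2, 3, 4)]

# Solve the triangular system one coefficient at a time.
sol_b2 = sp.solve(eqs[0], b2)[0]                           # -a2
sol_b3 = sp.solve(eqs[1].subs(b2, sol_b2), b3)[0]          # 2*a2**2 - a3
sol_b4 = sp.solve(eqs[2].subs({b2: sol_b2, b3: sol_b3}),
                  b4)[0]                                   # -5*a2**3 + 5*a2*a3 - a4
```

The paper's contribution is the sharp *estimate* of b₄ under the starlikeness hypothesis; the reversion formula above is only the algebraic starting point.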
"The Elements of Multi-Variate Analysis for Data Science" by M. S. Baladram and N. Obata. Interdisciplinary Information Sciences, DOI: 10.4036/iis.2020.a.02. Published 2020-01-01.

These lecture notes provide a quick review of basic concepts in statistical analysis and probability theory for data science. We survey the general description of single- and multi-variate data, and derive regression models by means of the method of least squares. As theoretical background we provide basic knowledge of probability theory, which is indispensable for further study of mathematical statistics and probability models. We show that the regression line for a multi-variate normal distribution coincides with the regression curve defined through the conditional density function. In the Appendix, matrix operations are quickly reviewed. These notes are based on the lectures delivered in the Graduate Program in Data Science (GP-DS) and the Data Sciences Program (DSP) at Tohoku University in 2018–2020.
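As a small illustration of the least-squares derivation reviewed in these notes (this example is not from the notes; it assumes NumPy), the regression coefficients solve the normal equations (XᵀX)β = Xᵀy:

```python
import numpy as np

# Fit y ~ beta0 + beta1 * x by least squares via the normal equations
# (X^T X) beta = X^T y.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, size=50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, size=50)   # true slope 2, intercept 1

X = np.column_stack([np.ones_like(x), x])   # design matrix with intercept column
beta = np.linalg.solve(X.T @ X, X.T @ y)    # beta = [intercept, slope]
```

With low noise the estimated pair recovers the generating coefficients closely, which is the behaviour the notes' conditional-density discussion explains for the multivariate normal case.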
"Walks: A Beginner's Guide to Graphs and Matrices" by Yuki Irie. Interdisciplinary Information Sciences, DOI: 10.4036/iis.2020.a.01. Published 2020-01-01.

We provide an introduction to graph theory and linear algebra. The present article consists of two parts. In the first part, we review the transfer-matrix method. It is known that many enumeration problems can be reduced to counting walks in a graph. After recalling the basics of linear algebra, we count walks in a graph by using eigenvalues. In the second part, we introduce PageRank by using a random walk model. PageRank is a method to estimate the importance of web pages and is one of the most successful algorithms. This article is based on the author's lectures at Tohoku University in 2018 and 2020.
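Both parts of the article can be sketched in a few lines (an illustration assuming NumPy, not taken from the article): entry (i, j) of A^k counts the walks of length k from i to j, and PageRank is the stationary vector of a damped random-walk matrix.

```python
import numpy as np

# Transfer-matrix method: (A^k)[i, j] counts walks of length k from i to j.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])                 # adjacency matrix of the triangle K3
walks3 = np.linalg.matrix_power(A, 3)     # closed walks of length 3: 2 per vertex

# PageRank by power iteration on the damped random-walk ("Google") matrix.
d = 0.85                                  # damping factor
n = A.shape[0]
P = A / A.sum(axis=1, keepdims=True)      # row-stochastic transition matrix
G = d * P + (1 - d) / n                   # add uniform teleportation
r = np.ones(n) / n
for _ in range(100):
    r = r @ G                             # converges to the PageRank vector

# By the symmetry of K3, every vertex ends up with PageRank 1/3.
```

The eigenvalue viewpoint in the article explains both computations at once: powers of A are governed by its spectrum, and the power iteration converges because the teleportation term makes G's largest eigenvalue simple.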
"Introduction to Supervised Machine Learning for Data Science" by M. S. Baladram, A. Koike, and Kazunori D. Yamada. Interdisciplinary Information Sciences, DOI: 10.4036/iis.2020.a.03. Published 2020-01-01.

We present an introduction to supervised machine learning methods with emphasis on neural networks, kernel support vector machines, and decision trees, which are representative methods of supervised learning. Recently, there has been a boom in artificial intelligence research. Neural networks are a key concept of deep learning and are the origin of the current boom. Support vector machines are among the most sophisticated learning methods in terms of prediction performance; their high performance is primarily owing to the kernel method, an important concept not only for support vector machines but also for other machine learning methods. While neural networks and kernel machines are so-called black-box methods, the decision tree is a white-box method whose prediction criteria can be easily interpreted. Decision trees are also used as the base method of ensemble learning, a refined technique for improving prediction performance. We review the theory of these supervised machine learning methods and illustrate their applications. We also discuss nonlinear optimization methods by which the machine learns from the training dataset.
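As a minimal concrete instance of a supervised learner fitting a training dataset by nonlinear optimization (a sketch assuming NumPy; logistic regression stands in for the richer models surveyed in the article), batch gradient descent on the cross-entropy loss looks like this:

```python
import numpy as np

# Logistic regression trained by batch gradient descent on the mean
# cross-entropy loss: a minimal supervised learner.
def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(1)
# Two well-separated Gaussian classes in the plane.
X = np.vstack([rng.normal(-2.0, 1.0, (50, 2)),
               rng.normal(2.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w = np.zeros(2)
b = 0.0
lr = 0.1                                  # learning rate
for _ in range(500):
    p = sigmoid(X @ w + b)                # predicted probabilities
    w -= lr * (X.T @ (p - y)) / len(y)    # gradient of the mean loss in w
    b -= lr * np.mean(p - y)              # gradient in the bias

acc = np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1))
```

The same gradient-descent loop, with more parameters and a nonlinear model, is the training procedure behind the neural networks discussed in the article.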
"New Invariants for Integral Lattices" by Ryota Hayasaka, T. Miezaki, and M. Toki. Interdisciplinary Information Sciences, DOI: 10.4036/IIS.2019.R.02. Published 2019-03-20.

Let Λ be any integral lattice in Euclidean space. It has been shown that for every integer n > 0, there is a hypersphere that passes through exactly n points of Λ. Using this result, we introduce new lattice invariants and give some computational results related to two-dimensional Euclidean lattices of class number one.
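The counting problem underlying these invariants can be explored by brute force (an illustrative sketch, not from the paper): enumerate the points of Z² lying on a circle of given center and squared radius.

```python
from itertools import product

# Enumerate points of the lattice Z^2 on the circle of squared radius r2
# about a given center, by brute force over a bounding box.
def points_on_circle(center, r2, box=20):
    cx, cy = center
    return [(x, y) for x, y in product(range(-box, box + 1), repeat=2)
            if (x - cx) ** 2 + (y - cy) ** 2 == r2]

# x^2 + y^2 = 25 has the 12 solutions (+-5, 0), (0, +-5), (+-3, +-4), (+-4, +-3).
pts = points_on_circle((0, 0), 25)
```

Moving the center off the lattice breaks these symmetries, which is how spheres through exactly n lattice points arise; the paper turns that phenomenon into invariants.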
"The Homogenization Method for Topology Optimization of Structures: Old and New" by G. Allaire, L. Cavallina, Nobuhito Miyake, Tomoyuki Oka, and Toshiaki Yachimura. Interdisciplinary Information Sciences, DOI: 10.4036/iis.2019.b.01. Published 2019-01-27.

These are the lecture notes of a short course on the homogenization method for topology optimization of structures, given by Grégoire Allaire during the "GSIS International Summer School 2018" at Tohoku University (Sendai, Japan). The goal of this course is to review the necessary mathematical tools of homogenization theory and apply them to topology optimization of mechanical structures. The ultimate application targeted in this course is the topology optimization of structures built with lattice materials. Practical and numerical exercises are given, based on the free finite element software FreeFem++.
"Optimal Design by Adaptive Mesh Refinement on Shape Optimization of Flow Fields Considering Proper Orthogonal Decomposition" by T. Nakazawa and C. Nakajima. Interdisciplinary Information Sciences, DOI: 10.4036/iis.2019.b.02. Published 2019-01-01.

This paper presents optimal design using Adaptive Mesh Refinement (AMR) with a shape optimization method. The method suppresses time-periodic flows driven only by a non-stationary boundary condition at a sufficiently low Reynolds number, using Snapshot Proper Orthogonal Decomposition (Snapshot POD). For shape optimization, an eigenvalue in Snapshot POD is defined as the cost function. The main problems are non-stationary Navier–Stokes problems and the eigenvalue problems of POD. An objective functional is described using Lagrange multipliers and the finite element method. Two-dimensional cavity flow with a disk-shaped isolated body is adopted. The non-stationary boundary condition is imposed on the top boundary, and no-slip boundary conditions are imposed on the side and bottom boundaries and on the disk boundary. For the numerical demonstration, the disk boundary is used as the design boundary. Using the H¹ gradient method for domain deformation, all triangles of the mesh are deformed as the cost function decreases. To avoid a loss of numerical accuracy caused by squeezed triangles, AMR is applied throughout the shape optimization process to maintain numerical accuracy equal to that of the mesh in the initial domain. The combination of eigenvalues that can best suppress the time-periodic flow is investigated.
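Snapshot POD itself reduces to a singular value decomposition of the snapshot matrix, with the POD eigenvalues used as the cost function being the squared singular values. A small synthetic sketch (assuming NumPy; not from the paper):

```python
import numpy as np

# Snapshot POD via the SVD of the snapshot matrix (columns = snapshots in time).
t = np.linspace(0.0, 2.0 * np.pi, 40)     # 40 snapshots of a periodic signal
x = np.linspace(0.0, 1.0, 100)            # 100 spatial sample points

# Synthetic time-periodic field built from exactly two spatial modes.
field = (np.outer(np.sin(np.pi * x), np.cos(3.0 * t))
         + 0.3 * np.outer(np.sin(2.0 * np.pi * x), np.sin(3.0 * t)))

U, s, Vt = np.linalg.svd(field, full_matrices=False)
energy = s**2 / np.sum(s**2)              # POD eigenvalues as relative energies

# Since the field has exactly two POD modes, the first two eigenvalues
# capture essentially all of the energy.
```

Shrinking these leading eigenvalues by deforming the design boundary is, in essence, what the paper's cost function asks the optimizer to do.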
"On the Classification of Knowledge-of-exponent Assumptions in Cyclic Groups" by Firas Kraiem, Shuji Isobe, E. Koizumi, and Hiroki Shizuya. Interdisciplinary Information Sciences, DOI: 10.4036/iis.2019.r.03. Published 2019-01-01.

Inspired by the work of Ghadafi and Groth (ASIACRYPT 2017) on a certain type of computational hardness assumptions in cyclic groups (which they call "target assumptions"), we initiate an analogous study of another type of hardness assumption, namely the "knowledge-of-exponent" assumptions (KEAs). Originally introduced by Damgård to construct practical encryption schemes secure against chosen-ciphertext attacks, KEAs have subsequently been used primarily to construct succinct non-interactive arguments of knowledge (SNARKs), and have been proved to be inherent to such constructions. Since SNARKs (and their zero-knowledge variant, zk-SNARKs) are already used in practice in systems such as the Zcash digital currency, the use of KEAs can be expected to increase, which makes it important to have a good understanding of those assumptions. Using a proof technique first introduced by Bellare and Palacio (but acknowledged by them as being due to Halevi), we first investigate the internal structure of the q-power knowledge-of-exponent (q-PKE) family of assumptions introduced by Groth, which is thus far the most general variant of KEAs. We then introduce a generalisation of the q-PKE family, and show that it can be simplified.
"Conformal Embeddings of an Open Riemann Surface into Another — A Counterpart of Univalent Function Theory —" by M. Shiba. Interdisciplinary Information Sciences, DOI: 10.4036/iis.2019.a.02. Published 2019-01-01.

We study conformal embeddings of a noncompact Riemann surface of finite genus into compact Riemann surfaces of the same genus, and show some of the close relationships between the classical theory of univalent functions and our results. Some new problems are also discussed. This article partially intends to introduce our results and to invite function-theorists working on plane domains to these topics on Riemann surfaces.