Two theories of current interest, both of mathematical and computational substance, concerning knowledge assessment in education are discussed: the theory of knowledge structures and the theory of Bayesian networks as applied to educational assessment. In four separate sections, the two theories are compared with respect to the sets of variables involved in their models, the set-theoretical and relational constructs defined on those variables, their probabilistic assumptions and properties, and the problems each theory addresses in constructing its models. For the comparison, a common base system of symbols and terms is adopted, which abstracts away from the idiosyncratic terminology of the two streams of literature. This system affords a clearer view of the similarities and differences between the two paradigms and a more precise appreciation of their arguments and capabilities.
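As standard background for the first of these theories (not detail given in the abstract itself), a knowledge structure in the sense of knowledge space theory is a family of knowledge states, i.e., subsets of a domain of items, and a knowledge space is such a family that contains the empty set and the full domain and is closed under union. The items and states below are hypothetical, chosen only to illustrate the closure check:

```python
from itertools import combinations

# Toy domain of three assessment items and a hypothetical family of
# knowledge states (each state = the set of items a student masters).
Q = frozenset("abc")
states = {frozenset(), frozenset("a"), frozenset("ab"), frozenset("ac"), Q}

def is_knowledge_space(domain, family):
    """Check the defining properties of a knowledge space:
    the empty state and the full domain belong to the family,
    and the family is closed under union."""
    if frozenset() not in family or domain not in family:
        return False
    return all(s | t in family for s, t in combinations(family, 2))

print(is_knowledge_space(Q, states))  # → True
```

Dropping the state {a, b} from the family would break closure, since the union of {a} and {a, b} would then fall outside it only if {a, b} were needed; more simply, a family containing {a} and {b} but not {a, b} fails the check.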
The priority heuristic is a lexicographic semi-order for choosing between gambles. Its merits include predicting people's majority choices out of sample more accurately than benchmarks such as prospect theory, having been axiomatized, and logically implying major violations of expected utility theory. It also has shortcomings, such as failing to account for individual differences and intricate choice patterns, and predicting less accurately than various model ensembles and neural networks in some environments. This note focuses on an important purported shortcoming of the heuristic: that it cannot produce valuations of gambles. I point out that the certainty equivalent of a gamble under the priority heuristic is known and suggest that this fact can be used to enhance the heuristic's scope. Indeed, with simple auxiliary assumptions and calculations, I demonstrate that the priority heuristic can explain the Saint Petersburg paradox and the equity premium puzzle, arguably more parsimoniously and plausibly than standard approaches.
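For readers unfamiliar with the heuristic, its lexicographic steps for two-outcome gain gambles (following Brandstätter, Gigerenzer, and Hertwig, 2006) can be sketched as follows. This is a simplified illustration, not the note's own formulation: in particular, the published heuristic rounds the aspiration level to a prominent number, which is omitted here.

```python
def priority_heuristic(a, b):
    """Choose between two-outcome gain gambles, each given as a list
    [(outcome, probability), (outcome, probability)]."""
    def min_gain(g): return min(o for o, _ in g)
    def max_gain(g): return max(o for o, _ in g)
    def p_min(g):    return next(p for o, p in g if o == min_gain(g))

    # Aspiration level: one tenth of the largest gain on offer
    # (without the prominent-number rounding of the original).
    aspiration = 0.1 * max(max_gain(a), max_gain(b))

    # Step 1: compare minimum gains.
    if abs(min_gain(a) - min_gain(b)) >= aspiration:
        return a if min_gain(a) > min_gain(b) else b
    # Step 2: compare probabilities of the minimum gains.
    if abs(p_min(a) - p_min(b)) >= 0.1:
        return a if p_min(a) < p_min(b) else b
    # Step 3: compare maximum gains.
    return a if max_gain(a) > max_gain(b) else b
```

For example, offered 2500 with probability 0.33 (else nothing) versus 2400 for sure, the minimum gains differ by more than the aspiration level, so the heuristic stops at step 1 and takes the sure 2400, the majority choice in Allais-type problems.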
Two intriguing papers of the late 1990s and early 2000s by J. Tanaka and colleagues put forth the hypothesis that a repository of face memories can be viewed as a vector space in which points represent faces and each point is surrounded by an attractor field. This hypothesis broadens the thesis of T. Valentine that face space consists of feature vectors in a finite-dimensional vector space (e.g., Valentine, 2001). The attractor fields in the atypical part of face space are broader and stronger than those in typical face regions. This notion makes the substantiated prediction that a face morphed midway between a typical and an atypical parent will be perceptually more similar to the atypical parent. We propose an alternative interpretation that takes a more standard geometrical approach but departs from the types of metrics assumed in almost all multidimensional scaling studies. Instead, we propose a theoretical structure based on our earlier investigations of non-Euclidean and, especially, Riemannian face manifolds (e.g., Townsend, Solomon, & Spencer-Smith, 2001). We assert that this approach avoids some of the issues raised by the gradient theme by working directly with the type of metric inherently associated with the face space. Our approach marks a shift toward non-Euclidean geometries, especially Riemannian manifolds, and integrates these geometric concepts with processing-oriented modeling. We note that while fields such as probability theory, stochastic process theory, and mathematical statistics are commonly studied in mathematical psychology, less attention is paid to topology, non-Euclidean geometry, and functional analysis. Therefore, both to elevate comprehension and to promote the latter topics as critical for our present and future enterprises, our exposition proceeds in a highly tutorial fashion, and we embed the material in its proper historical context.
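The metric distinction at stake here can be made concrete with a toy example that is not from the paper itself: if hypothetical face representations are constrained to a curved surface such as a unit sphere, distances measured along the manifold (geodesics) differ from straight-line distances through the embedding space, which is the kind of discrepancy a Riemannian treatment of face space takes seriously.

```python
import numpy as np

def euclidean(u, v):
    """Straight-line (chord) distance through the embedding space."""
    return float(np.linalg.norm(u - v))

def geodesic(u, v):
    """Great-circle distance on the unit sphere: the arc length
    along the manifold, arccos of the dot product."""
    return float(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

# Two hypothetical "face" points on the unit sphere.
u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])

print(euclidean(u, v))  # chord length, sqrt(2) ≈ 1.414
print(geodesic(u, v))   # arc length, pi/2 ≈ 1.571
```

The geodesic always exceeds the chord and grows nonlinearly with it, so similarity judgments modeled on the manifold's intrinsic metric need not track those modeled with the Euclidean metric of the ambient space.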

