https://arxiv.org/pdf/1405.6142.pdf A Computational Theory of Subjective Probability

https://arxiv.org/pdf/1701.07879v4.pdf A Radically New Theory of how the Brain Represents and Computes with Probabilities

The prevailing probabilistic population coding (PPC) theories share the following assumptions:

1. Neural activation is continuous (graded).
2. All neurons in the coding field formally participate in the active code, whether it represents a single hypothesis or a distribution over all hypotheses; such a representation is referred to as a fully distributed representation.
3. Synapse strength is continuous (graded).
4. These approaches have generally been formulated in terms of rate coding (Sanger 2003), which requires significant time, e.g., on the order of tens of ms, for reliable decoding.
5. The tuning functions (TFs) of the neurons are assumed a priori to be unimodal and bell-shaped over any one dimension; consequently, these approaches do not explain how such TFs might develop through learning.
6. Individual neurons are assumed to be intrinsically noisy, e.g., firing with Poisson variability.
7. Noise and correlation are viewed primarily as problems that must be dealt with, e.g., by reducing noise correlation through averaging.
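To make assumptions 4-7 concrete, here is a minimal decoding sketch in the conventional framework (Gaussian tuning curves, independent Poisson spiking, rate coding over a time window). It is a generic illustration of that literature, not code from either paper; all names and parameters are illustrative.

import numpy as np

def gaussian_tuning(s_grid, centers, gain=20.0, width=0.15):
    # Bell-shaped tuning functions f_i(s): expected firing rate (Hz) of unit i at stimulus s.
    return gain * np.exp(-0.5 * ((s_grid[None, :] - centers[:, None]) / width) ** 2)

def decode_posterior(spike_counts, s_grid, centers, dt=0.05):
    # Independent-Poisson log likelihood: log P(r|s) = sum_i [ r_i log(f_i(s) dt) - f_i(s) dt ]
    mean_counts = gaussian_tuning(s_grid, centers) * dt
    log_like = spike_counts @ np.log(mean_counts + 1e-12) - mean_counts.sum(axis=0)
    post = np.exp(log_like - log_like.max())   # flat prior over the stimulus grid
    return post / post.sum()

# Usage: 50 units tiling [0, 1], true stimulus 0.3, spikes counted over a 50 ms window.
rng = np.random.default_rng(0)
centers = np.linspace(0.0, 1.0, 50)
s_grid = np.linspace(0.0, 1.0, 201)
counts = rng.poisson(gaussian_tuning(np.array([0.3]), centers).ravel() * 0.05)
posterior = decode_posterior(counts, s_grid, centers)
print("MAP stimulus estimate:", s_grid[posterior.argmax()])

Note how the decode relies on graded rates, bell-shaped TFs, and a counting window of tens of ms, which is exactly what the theory below dispenses with.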

We present a radically different theory that assumes:

1) Units are binary.
2) Only a small subset of units, i.e., a sparse distributed representation (SDR) (a.k.a. cell assembly, ensemble), comprises any individual code.
3) Synapses are functionally binary.
4) Signaling formally requires only single (i.e., first) spikes.
5) Units initially have completely flat TFs (all weights zero).
6) Units are far less intrinsically noisy than traditionally thought.
7) Rather, noise is a resource that is generated and used to cause similar inputs to map to similar codes; it controls a tradeoff between storage capacity and embedding the input-space statistics in the pattern of intersections over stored codes, and it epiphenomenally determines the correlation patterns across neurons.
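A toy sketch of point 7 (illustrative only, not the algorithm from the paper): a code consists of one winning binary unit per winner-take-all module, and the amount of noise injected into winner selection is tied to input familiarity, so a familiar input reactivates its stored code almost deterministically while a novel input receives a quasi-random, largely non-overlapping one. All names and parameters below are made up for illustration.

import numpy as np

Q, K = 8, 10              # Q winner-take-all modules, K binary units per module
rng = np.random.default_rng(1)

def choose_code(support, familiarity):
    # support: (Q, K) array of summed input weights to each unit.
    # familiarity in [0, 1]: 1 means the input exactly matches a stored pattern.
    code = np.zeros((Q, K), dtype=np.uint8)
    beta = 20.0 * familiarity          # high familiarity -> sharp (low-noise) selection
    for q in range(Q):
        probs = np.exp(beta * support[q])
        probs /= probs.sum()
        code[q, rng.choice(K, p=probs)] = 1    # exactly one winner per module -> sparse code
    return code

# With familiarity near 1 the most-supported unit in each module wins almost surely,
# reusing the stored code; with familiarity near 0 winners are chosen uniformly at
# random, yielding a new code that overlaps stored codes only by chance.
support = rng.random((Q, K))
print(choose_code(support, familiarity=1.0).sum(), "active units out of", Q * K)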

https://arxiv.org/abs/1011.0723 Entropic Inference

In this tutorial we review the essential arguments behind entropic inference. We focus on the epistemological notion of information and its relation to the Bayesian beliefs of rational agents. The problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME) includes as special cases both MaxEnt and Bayes' rule, and therefore unifies the two themes of these workshops, the Maximum Entropy and the Bayesian methods, into a single general inference scheme.
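For reference, the updating rule the abstract describes can be written in its standard form (a paraphrase in generic notation, not a quotation from the paper): the posterior p maximizes the relative entropy with respect to the prior q,

S[p, q] = -\int dx\, p(x)\,\ln\frac{p(x)}{q(x)},

subject to normalization and to whatever constraints encode the new information. For an expectation constraint \int dx\, p(x)\, f(x) = F the maximizer is

p(x) = \frac{q(x)\, e^{\lambda f(x)}}{Z(\lambda)}, \qquad Z(\lambda) = \int dx\, q(x)\, e^{\lambda f(x)},

with the multiplier \lambda fixed by the constraint. With a uniform prior q this reduces to the ordinary MaxEnt result.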

The main conclusion is that the logarithmic relative entropy is the only candidate for a general method for updating probabilities (the ME method), and this method includes both MaxEnt and Bayes' rule as special cases; it unifies them into a single theory of inductive inference and allows new applications. Indeed, much as the old MaxEnt method provided the foundation for statistical mechanics, recent work suggests that the extended ME method provides an entropic foundation for quantum mechanics.
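A sketch of how Bayes' rule falls out as a special case (following the standard entropic-inference derivation; notation as above): the joint prior q(x, \theta) = q(\theta)\, q(x \mid \theta) is updated subject to the data constraint that the marginal over the observable x be concentrated at the observed value x',

\int d\theta\, p(x, \theta) = \delta(x - x').

Maximizing S[p, q] under this constraint leaves the conditional untouched, p(\theta \mid x) = q(\theta \mid x), so the updated marginal over \theta is

p(\theta) = q(\theta \mid x') = \frac{q(\theta)\, q(x' \mid \theta)}{q(x')},

which is exactly Bayes' rule.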