Classification of Bayesian Inverse Problems by their Continuous/Discrete Nature

Two traditional classes of inverse problems are the estimation of absolutely continuous random parameters and the detection (or classification) of discrete random parameters from continuous random measurements. But what about inference in mixed discrete-continuous problems? In the following, I summarize my proposal of six classes of inverse problems and, hence, six classes of inferrers. See [1] for a table and formulas.


The first two classes are the traditional inferrers; the remaining four extend them:

  • Estimation: Inference of continuous random parameter vectors from absolutely continuous random measurements. Well-known estimators are the maximum a posteriori (MAP) estimator, the minimum mean-squared-error (MMSE) estimator, and the median estimator.
  • Detection or Classification: A detector or classifier infers discrete random vectors from continuous random measurements. For a MAP detector, the likelihood ratio is a popular decision quantity. Note that the joint distribution of parameters and measurements is singular-continuous; it induces a hybrid density that is the product of the measurements' conditional probability density function and the parameters' probability mass function.
  • Multiple-Switching Inference: The parameter vector consists of discrete and continuous random elements. The joint a posteriori distribution of this vector is singular-continuous, which induces a hybrid density.
  • Hybrid Inference: The straightforward extension of the multiple-switching problem to both continuous and discrete measurements.
  • Mixed Inference in One Dimension: The probability distribution of parameter and measurement is a normalized superposition of a discrete and an absolutely continuous distribution.
  • Mixed and Hybrid Inference: The probability distribution of the parameter and the measurement vector is a normalized superposition of a discrete, an absolutely continuous, and a singular-continuous distribution.
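To make the second class concrete, here is a minimal sketch of a MAP detector for a binary parameter observed in Gaussian noise. The model choices (conditional means, noise level, prior masses) are illustrative assumptions, not values from the text; the point is only the shape of the likelihood-ratio decision rule.

```python
# Minimal MAP detector sketch: a discrete parameter theta in {0, 1} is
# inferred from a continuous Gaussian measurement y. All numbers below
# (means, noise level, prior) are assumptions chosen for illustration.
import math

MU = {0: 0.0, 1: 2.0}     # conditional mean of the measurement under each hypothesis
SIGMA = 1.0               # noise standard deviation (assumed known)
PRIOR = {0: 0.7, 1: 0.3}  # probability mass function of the discrete parameter

def likelihood(y: float, theta: int) -> float:
    """Conditional probability density p(y | theta), here Gaussian."""
    z = (y - MU[theta]) / SIGMA
    return math.exp(-0.5 * z * z) / (SIGMA * math.sqrt(2.0 * math.pi))

def map_detect(y: float) -> int:
    """MAP rule: decide theta = 1 iff the likelihood ratio
    p(y|1)/p(y|0) exceeds the prior ratio P(0)/P(1)."""
    ratio = likelihood(y, 1) / likelihood(y, 0)
    return 1 if ratio > PRIOR[0] / PRIOR[1] else 0
```

With these numbers the rule reduces to a simple threshold on y (a little above the midpoint 1.0, since hypothesis 0 has the larger prior mass), which is the usual picture for Gaussian binary detection.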
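The fifth class (mixed inference in one dimension) can likewise be illustrated with a spike-and-slab model, a textbook example of a distribution that superposes a point mass and an absolutely continuous part. The prior spike weight and the two variances below are assumptions for the demo.

```python
# Spike-and-slab sketch: the scalar parameter theta is exactly 0 with
# probability P_SPIKE (discrete part) and otherwise drawn from N(0, TAU2)
# (continuous part); the measurement is y = theta + Gaussian noise.
# The posterior of theta stays mixed; we compute its point-mass weight.
# P_SPIKE, TAU2, and SIG2 are illustrative assumptions.
import math

P_SPIKE = 0.5   # prior probability that theta is exactly 0
TAU2 = 4.0      # variance of the continuous ("slab") prior component
SIG2 = 1.0      # measurement noise variance

def normal_pdf(y: float, var: float) -> float:
    """Density of a zero-mean Gaussian with the given variance."""
    return math.exp(-0.5 * y * y / var) / math.sqrt(2.0 * math.pi * var)

def posterior_spike_prob(y: float) -> float:
    """P(theta = 0 | y): weight of the point mass in the mixed posterior.

    Marginally, y ~ N(0, SIG2) under the spike and y ~ N(0, SIG2 + TAU2)
    under the slab, so Bayes' rule mixes the two evidences."""
    ev_spike = P_SPIKE * normal_pdf(y, SIG2)
    ev_slab = (1.0 - P_SPIKE) * normal_pdf(y, SIG2 + TAU2)
    return ev_spike / (ev_spike + ev_slab)
```

Small measurements favor the spike (the point mass keeps most of the posterior weight), while large measurements shift nearly all weight to the continuous slab component, so the posterior remains a normalized superposition of a discrete and an absolutely continuous distribution, exactly as in the class definition.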