Blog Timeline

What is a tensor? And why isn't a matrix a tensor?

Note that this is only a first draft, a rough sketch, but you should get the idea ;-). 

A tensor represents a physical quantity. Temperature, for example, is a tensor of order zero, also called a scalar. The temperature itself is independent of its representation: you can specify it in kelvin or in degrees Celsius. 

Games with Two Dice and the Double Rule

Some days ago I played Monopoly, a board game that is played with two dice. Afterwards I was eager to calculate the probability that a player arrives at a particular street. I started by computing the probabilities of rolling each number of points. This is not as trivial as you might guess, because there are rules for doubles, i.e. $(1,1), (2,2), \ldots, (6,6)$. Later I realized that there are very good internet resources on Monopoly by J. Bewersdorff, including a graphical simulation. It shows that the probability of arriving at a particular street is close to uniform after many rounds, except for the fields after the jail and special non-street fields. If you want to know more about Monopoly, look at [1, German], Bewersdorff's book "Glück, Logik und Bluff: Mathematik im Spiel - Methoden, Ergebnisse und Grenzen" [2, German], or the bachelor thesis "Monopoly und Markow-Ketten" by Julia Tenie [3, German]. There is also a FAQ on the probabilities of throwing dice [4, English] and an article at Wolfram MathWorld on computing the probability of the points when using several dice [5, English].

In what follows, I will address dice games played with two dice and a double rule. In particular, I will answer the following questions:

  • What is the probability of a particular number of points (sum) when rolling two dice with a double rule?
  • What is the expected number of points (sum) when rolling two dice with a double rule? How does it differ from the case without the double rule?

I use Monopoly's double rule:

  1. If a player rolls a double, the player rolls again and adds the new result to the previous one.
  2. If a player rolls a double three times in a row, the turn is over. 

If you are only interested in the probabilities, then jump to Tables I and II. If you are interested in the theory as well, then you will also find the answer to the following question:

  • How is the probability mass function of the points defined? 
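To make the double rule concrete, here is a minimal Monte Carlo sketch (my own toy code, not part of the original analysis; the exact probabilities are in Tables I and II). One assumption I have to make explicit: I score a turn that ends with three doubles in a row as 0 points, in the spirit of Monopoly's go-to-jail rule.

```python
import random

def roll_turn(rng):
    """Points of one turn under the double rule: re-roll on a double,
    three doubles in a row forfeit the turn (scored as 0, by assumption)."""
    total = 0
    for _ in range(3):
        a, b = rng.randint(1, 6), rng.randint(1, 6)
        total += a + b
        if a != b:          # no double: the turn ends normally
            return total
    return 0                # three doubles in a row: turn forfeited

def simulate(n_turns, seed=0):
    """Empirical mean points per turn over n_turns simulated turns."""
    rng = random.Random(seed)
    return sum(roll_turn(rng) for _ in range(n_turns)) / n_turns
```

Without the double rule the expected sum is 7. With it (and the forfeit assumption above), conditioning on the number of doubles gives $(5/6)\cdot 7 + (5/36)\cdot 14 + (5/216)\cdot 21 = 1785/216 \approx 8.26$, which the simulation approaches for large `n_turns`.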

Classification of Bayesian Inverse Problems by their Continuous/Discrete Nature

Two traditional classes of inverse problems are the estimation of absolutely continuous random parameters and the detection/classification of discrete random parameters from continuous random measurements. But what about inference in mixed discrete-continuous problems? In the following, I summarize my proposal of six classes of inverse problems and, hence, six classes of inferrers. See [1] for a table and formulas.

What Does a Mobile Phone Have to Estimate?

Some scientific fields are very visible to the public - a good example from recent years are the various space missions. Others are not visible at all. Recently I wanted to explain to an acquaintance what I had been working on during my research. One of the biggest problems in explaining is the terminology. Technical terms often become so ingrained that it does not even occur to you that the other person might not understand them. Or you assume prior knowledge that he or she simply does not have. This is an attempt to explore, using an everyday example and without going into the mathematics, the questions: "What does a mobile phone estimate? What is estimating in the first place?"

Is this question even meaningful? "Computers don't estimate," many will say. But before we address that, let us look at some examples of when humans estimate. We can then easily transfer these to technical devices.

Three Paradigms of Inverse Problems: Algebraic vs. Frequentist vs. Bayesian Inference

Consider a sensor that measures physical parameters like temperature, pressure, or velocity. The sensor introduces perturbations and noise; hence, one key problem is the optimal inference of the parameter $\boldsymbol{x}$ from the measurement $\boldsymbol{y}$ of the sensor. Such inference of parameters is used in many research areas like telecommunications, finance, medicine, or social science. When we speak about inference we have to ask: What is optimal inference? Which criterion shall we use? A natural criterion is the inference error. But how should the error be defined? In the sequel, I address these main questions and hope to give a good overview. 
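As an illustrative sketch of the frequentist vs. Bayesian contrast (my own toy example, not from the article): in the scalar model $y = x + v$ with Gaussian prior $x \sim \mathcal{N}(0, \sigma_x^2)$ and Gaussian noise $v \sim \mathcal{N}(0, \sigma_v^2)$, the frequentist maximum-likelihood estimate is $\hat{x} = y$, while the Bayesian MMSE estimate shrinks $y$ toward the prior mean.

```python
import random

def empirical_mse(n, sx, sv, seed=0):
    """Compare the empirical mean-square errors of the ML estimator
    x_hat = y and the MMSE estimator x_hat = sx^2/(sx^2+sv^2) * y
    in the scalar Gaussian model y = x + v."""
    rng = random.Random(seed)
    w = sx**2 / (sx**2 + sv**2)          # MMSE shrinkage factor
    mse_ml = mse_mmse = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, sx)           # draw parameter from the prior
        y = x + rng.gauss(0.0, sv)       # noisy measurement
        mse_ml   += (y - x) ** 2         # ML: use the raw measurement
        mse_mmse += (w * y - x) ** 2     # MMSE: shrink toward prior mean 0
    return mse_ml / n, mse_mmse / n
```

The theoretical values are $\sigma_v^2$ for the ML estimator and $\sigma_x^2 \sigma_v^2 / (\sigma_x^2 + \sigma_v^2)$ for the MMSE estimator, so averaged over the prior the Bayesian estimator is never worse.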

Performance Bounds for Bayesian Estimators: Cramer-Rao Bounds and Weiss-Weinstein Bounds

Bayesian performance bounds are meant to benchmark Bayesian estimators and detectors, which infer random parameters of interest from noisy measurements. These parameters are usually physical quantities such as temperature, position, etc. 

Consider a measurement model 

\[\boldsymbol{y} = C(\boldsymbol{x})~,\]

where a sensor, modeled by a probabilistic mapping $C$, measures a random parameter vector $\boldsymbol{x}$. A vector-valued estimator $\hat{\boldsymbol{x}}(\boldsymbol{y})$ infers the parameter vector from the random measurements $\boldsymbol{y}$. A simple example adds noise to the parameter, i.e.

\[\boldsymbol{y} = \boldsymbol{x} + \boldsymbol{v}~,\]

where the random vector $\boldsymbol{v}$ models measurement noise. 

A performance bound [VB07] is a lower bound on the mean-square-error matrix $\mathrm{E}(\tilde{\boldsymbol{x}}\tilde{\boldsymbol{x}}^{\mathrm{T}})$ for the estimation error $\tilde{\boldsymbol{x}} = \hat{\boldsymbol{x}}(\boldsymbol{y}) - \boldsymbol{x}$ of any Bayesian estimator. This "any" is in contrast to the traditional frequentist Cramer-Rao bound, which applies only to unbiased estimators. 

A popular performance bound is the Van Trees bound (Bayesian Cramer-Rao bound), which is the Bayesian version of the traditional Cramer-Rao bound. It is a member of the family of Weiss-Weinstein bounds, which in turn is a subclass of the family of Bayesian lower bounds. With these bounds it is possible to compare different Bayesian estimators. Note that $\tilde{\boldsymbol{x}}\tilde{\boldsymbol{x}}^{\mathrm{T}}$ is the squared-error loss whose minimization yields the minimum-mean-square-error (MMSE) estimator. Hence, with respect to the loss $\tilde{\boldsymbol{x}}\tilde{\boldsymbol{x}}^{\mathrm{T}}$, all other Bayesian estimators perform worse than the MMSE estimator (cf. Algebraic vs. Frequentist vs. Bayesian Inference). 
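A small sketch of how such a bound can be checked numerically (my own toy example, assuming the scalar linear-Gaussian case $y = x + v$ with $x \sim \mathcal{N}(0, \sigma_x^2)$, $v \sim \mathcal{N}(0, \sigma_v^2)$; in this case the Bayesian Cramer-Rao bound is tight and equals the MMSE): the Bayesian information is $J = 1/\sigma_v^2 + 1/\sigma_x^2$, the sum of the data information and the prior information, and the bound on the MSE of any estimator is $1/J$.

```python
import random

def bayesian_crb(sx, sv):
    """Bayesian Cramer-Rao bound 1/J with J = 1/sv^2 + 1/sx^2
    for the scalar model y = x + v (Gaussian prior and noise)."""
    return 1.0 / (1.0 / sv**2 + 1.0 / sx**2)

def mmse_empirical(n, sx, sv, seed=0):
    """Empirical MSE of the MMSE estimator x_hat = sx^2/(sx^2+sv^2) * y."""
    rng = random.Random(seed)
    w = sx**2 / (sx**2 + sv**2)
    err = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, sx)
        y = x + rng.gauss(0.0, sv)
        err += (w * y - x) ** 2
    return err / n
```

In this Gaussian toy case the empirical MSE of the MMSE estimator matches the bound; for other priors or nonlinear models the bound is merely a lower limit.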

Sequential Weiss-Weinstein Bounds

Sequential Bayesian estimation and detection [VB07] infer temporally evolving states, in contrast to non-evolving parameters. The sequential Weiss-Weinstein bound is a lower bound on the mean-square-error matrix of any sequential Bayesian estimator. This article builds on Bayesian Cramer-Rao Bounds and Weiss-Weinstein Bounds about non-sequential Bayesian inference and performance bounds. Note that only a short overview is possible here; hence I omit many details that can be found in the references. 

Letters using LaTeX

Writing letters in LaTeX is very easy and fast once you have created your own template: for each new letter you only have to change the recipient's address and the text. In other words, you won't have to modify the layout. Of course, you can do the same in MS/Libre Office as well. But then the letter is still not properly typeset. And you won't get fold marks and the benefits of window envelopes. ;-)
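As a sketch of such a template (a minimal example using the KOMA-Script letter class scrlttr2; all names and addresses are placeholders):

```latex
\documentclass[foldmarks=true]{scrlttr2}
% Sender data lives in the template and never changes between letters.
\setkomavar{fromname}{Jane Doe}
\setkomavar{fromaddress}{Example Street 1\\12345 Sampletown}
\setkomavar{subject}{A letter from a reusable template}
\begin{document}
% Only the recipient address and the body change per letter.
\begin{letter}{Recipient Name\\Some Road 2\\54321 Otherville}
\opening{Dear recipient,}
The class places the recipient address so that it shows through a
window envelope, and \texttt{foldmarks=true} prints the fold marks.
\closing{Best regards,}
\end{letter}
\end{document}
```

Compile with `pdflatex`; for German letters, loading babel with the `ngerman` option adjusts the fixed strings.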

Rules for Good Graphics and Writing

For a reader, informative, concise, and well-structured articles and figures (graphics) are very important. In this article I briefly introduce some resources on the web. You are welcome to point me to other web sites. Although some links are originally related to typesetting in LaTeX, the rules and hints are much more general.