
Fano's inequality

In information theory, Fano's inequality (also known as the Fano converse and the Fano lemma) relates the average information lost in a noisy channel to the probability of the categorization error. It was derived by Robert Fano in the early 1950s while teaching a Ph.D. seminar in information theory at MIT, and later recorded in his 1961 textbook.

It is used to find a lower bound on the error probability of any decoder, as well as lower bounds for minimax risks in density estimation.

Let the random variables $X$ and $Y$ represent input and output messages with a joint probability $P(x, y)$. Let $e$ represent an occurrence of error; i.e., that $X \neq \tilde{X}$, with $\tilde{X} = f(Y)$ being an approximate version of $X$. Fano's inequality is

\[ H(X \mid Y) \le H_b(e) + P(e) \log(|\mathcal{X}| - 1), \]

where $\mathcal{X}$ denotes the support of $X$,

\[ H(X \mid Y) = -\sum_{i,j} P(x_i, y_j) \log P(x_i \mid y_j) \]

is the conditional entropy,

\[ P(e) = P(X \neq \tilde{X}) \]

is the probability of the communication error, and

\[ H_b(e) = -P(e) \log P(e) - (1 - P(e)) \log(1 - P(e)) \]

is the corresponding binary entropy.
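Fano's inequality can be checked numerically for a small channel. The sketch below (in Python; the joint distribution, the decoder, and all helper names are illustrative choices, not from the source) computes the conditional entropy $H(X \mid Y)$, the error probability $P(e)$, and the right-hand side of the inequality for a binary symmetric channel with crossover probability 0.1, uniform input, and the decoder $\tilde{X} = Y$:

```python
import math

def h_b(p):
    """Binary entropy in bits; H_b(0) = H_b(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def cond_entropy(joint):
    """H(X|Y) in bits for a joint pmf given as a dict {(x, y): prob}."""
    p_y = {}
    for (x, y), p in joint.items():
        p_y[y] = p_y.get(y, 0.0) + p
    h = 0.0
    for (x, y), p in joint.items():
        if p > 0:
            h -= p * math.log2(p / p_y[y])  # -P(x,y) log P(x|y)
    return h

def fano_rhs(joint, decoder, support_size):
    """Right-hand side of Fano's inequality for the decoder X~ = decoder(y)."""
    p_err = sum(p for (x, y), p in joint.items() if decoder(y) != x)
    return h_b(p_err) + p_err * math.log2(support_size - 1), p_err

# Illustrative example: binary symmetric channel, crossover 0.1,
# uniform input, decoder X~ = Y.
p = 0.1
joint = {(0, 0): 0.5 * (1 - p), (0, 1): 0.5 * p,
         (1, 0): 0.5 * p,       (1, 1): 0.5 * (1 - p)}
lhs = cond_entropy(joint)
rhs, p_err = fano_rhs(joint, lambda y: y, support_size=2)
```

For this binary channel $|\mathcal{X}| = 2$, so the $P(e)\log(|\mathcal{X}|-1)$ term vanishes and both sides equal $H_b(0.1)$; the inequality holds with equality. For larger alphabets that term becomes active and the bound loosens.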

Alternative formulation

Let $X$ be a random variable with density equal to one of $r + 1$ possible densities $f_1, \ldots, f_{r+1}$. Furthermore, the Kullback–Leibler divergence between any pair of densities cannot be too large,

\[ D_{KL}(f_i \parallel f_j) \le \beta \quad \text{for all } i \neq j. \]

Let $\psi(X) \in \{1, \ldots, r+1\}$ be an estimate of the index. Then

\[ \sup_i P_i(\psi \neq i) \ge 1 - \frac{\beta + \log 2}{\log r}, \]

where $P_i$ is the probability induced by $f_i$.
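As an illustration of the bound (the numbers here are chosen for the example, not taken from the source), suppose $r = 16$ and the pairwise divergences satisfy $\beta = \log 2$. Then

\[ \sup_i P_i(\psi \neq i) \;\ge\; 1 - \frac{\log 2 + \log 2}{\log 16} \;=\; 1 - \frac{2\log 2}{4\log 2} \;=\; \frac{1}{2}, \]

so for at least one index the estimator errs with probability at least $1/2$: no procedure can reliably identify which of the densities generated the data.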

Generalization

The following generalization is due to Ibragimov and Khasminskii (1979), Assouad and Birgé (1983).

Let $F$ be a class of densities with a subclass of $r + 1$ densities $f_\theta$ such that for any $\theta \neq \theta'$,

\[ \|f_\theta - f_{\theta'}\|_{L_1} \ge \alpha, \]
\[ D_{KL}(f_\theta \parallel f_{\theta'}) \le \beta. \]

Then in the worst case the expected value of the error of estimation is bounded from below,

\[ \sup_{f \in F} \mathbb{E}\,\|f_n - f\|_{L_1} \ge \frac{\alpha}{2}\left(1 - \frac{n\beta + \log 2}{\log r}\right), \]

where $f_n$ is any density estimator based on a sample of size $n$.
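A note on how such a bound is typically exercised (the choice of constants here is illustrative, not from the source): if the subclass can be constructed so that $n\beta + \log 2 \le \tfrac{1}{2}\log r$, the bracketed factor is at least $1/2$ and

\[ \sup_{f \in F} \mathbb{E}\,\|f_n - f\|_{L_1} \ge \frac{\alpha}{2}\left(1 - \frac{n\beta + \log 2}{\log r}\right) \ge \frac{\alpha}{4}, \]

so the $L_1$-separation $\alpha$ of the subclass translates directly into a minimax lower bound for estimation over all of $F$.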

References

Fano, Robert M. (1961). Transmission of Information: A Statistical Theory of Communications. MIT Press.