Discussion:
A question on Shannon entropy
M. Boreale
2004-07-06 15:55:50 UTC
Hello,
I'm an outsider to cryptography (though reasonably well informed), so
apologies in advance if this turns out to be a trivial question.

Is there any result relating Shannon entropy to collision probability?

I.e., to be more precise, suppose:
- X is a discrete r.v. with probability distribution p_1,...,p_N;
- IC(X) is the collision probability of X, Sum_i p_i^2 (i.e. the
probability of getting the same outcome twice in two independent
experiments);
- H(X) is the Shannon entropy of X, -Sum_i p_i*log(p_i);

My question is if there is any theorem relating IC(X) to H(X).

The (obvious) intuition is that the higher IC(X), the lower H(X).
E.g., if X is constant then IC(X)=1 and H(X)=0, while if X is
uniformly distributed then IC(X) reaches its minimum (1/N) and H(X)
its maximum (log(N)). Does this generalize to arbitrary X, and how?
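
For concreteness, here is a minimal Python sketch of the two quantities
(the function and variable names are just for illustration), checking
the two endpoint cases and a skewed distribution in between:

from math import log2

def ic(p):
    # collision probability: Sum_i p_i^2
    return sum(q * q for q in p)

def h(p):
    # Shannon entropy in bits: -Sum_i p_i*log2(p_i), with 0*log(0) = 0
    return -sum(q * log2(q) for q in p if q > 0)

N = 8
constant = [1.0] + [0.0] * (N - 1)
uniform = [1.0 / N] * N
skewed = [0.5] + [0.5 / (N - 1)] * (N - 1)

for name, p in [("constant", constant), ("uniform", uniform), ("skewed", skewed)]:
    print(name, ic(p), h(p))
# constant: IC=1,      H=0
# uniform:  IC=1/N,    H=log2(N)=3
# skewed:   IC~0.286,  H~2.4 (in between)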

I'm asking this because, when the distribution of X is unknown, it
is apparently simpler to estimate IC(X) (say, by measuring the
fraction of colliding pairs over repeated independent draws of X)
than to estimate H(X).
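
To sketch what I mean (the sampler draw() and the sample size M below
are placeholders): among M independent draws there are M*(M-1)/2
unordered pairs, and the fraction of those pairs that collide is an
unbiased estimate of IC(X).

import random
from collections import Counter

def estimate_ic(draw, M=10000):
    # draw M i.i.d. samples and count how many unordered pairs collide
    counts = Counter(draw() for _ in range(M))
    colliding_pairs = sum(c * (c - 1) // 2 for c in counts.values())
    total_pairs = M * (M - 1) // 2
    return colliding_pairs / total_pairs

# example: a biased 8-sided die (the skewed distribution above)
probs = [0.5] + [0.5 / 7] * 7
print(estimate_ic(lambda: random.choices(range(8), weights=probs)[0]))
# should come out close to 0.5^2 + 7*(0.5/7)^2 ~ 0.286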

Many thanks

- Michele
Shai Halevi
2004-07-11 14:32:11 UTC
The "collision probability" is directly related to the Renyi entropy
of a source, rather than to its Shannon entropy. The Renyi entropy
(of order 2) is defined as H_2(X) = -log( sum (p_i)^2 ), which is
exactly what you need.
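
To make the connection with the collision probability explicit (this is
immediate from the definitions, with logs to base 2):

H_2(X) = -log( Sum_i p_i^2 ) = -log( IC(X) ),  i.e.  IC(X) = 2^(-H_2(X)).

So an estimate of the collision frequency translates directly into an
estimate of the order-2 Renyi entropy.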

You can prove some connections between the Renyi and Shannon entropies
(e.g., H_2(X) <= H(X)), but in general they are not good indicators of
one another. An extreme example is a distribution over n-bit strings
that gives probability 1/2 to the all-zero string and probability
1/(2^(n+1) - 2) to each of the other strings. This distribution has
more than n/2 bits of Shannon entropy but less than two bits of Renyi
entropy.
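
A quick numerical check of this example (just a sketch; n = 64 is an
arbitrary choice), computing both entropies in closed form rather than
by enumerating the 2^n strings:

from math import log2

def shannon_vs_renyi2(n):
    p0 = 0.5                  # mass on the all-zero string
    m = 2**n - 1              # number of remaining strings
    p = (1 - p0) / m          # mass on each other string, 1/(2^(n+1) - 2)
    shannon = -(p0 * log2(p0) + m * p * log2(p))
    renyi2 = -log2(p0**2 + m * p**2)
    return shannon, renyi2

print(shannon_vs_renyi2(64))
# roughly (33.0, 2.0): about n/2 + 1 bits of Shannon entropy,
# but (just under) 2 bits of Renyi entropy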

-- Shai