
So, say you have some secrets you desperately want to air, but not so desperately that everyone should know them. And you are also a little weird, so you only want people to be able to know them if they get together and work out the clues. (Oh dear, that sounds like an exhausting game night.)
Then you have a secret sharing scheme. Specifically, you have got yourself into a situation where you have a set P of people, a set C of subsets of P (let's call them conspirators), a collection of secrets S and a collection of hints H. What you want is for a set of people to be able to figure out the secret exactly when it contains one of the c in C. Think of it like that Simpsons episode where every single Flying Hellfish was needed to assemble the treasure; or think of a more realistic example of a company or agency in which vital information is only accessible to a group. That sort of thing.

A secret sharing scheme is perfect if everyone who can figure out even just a little about the secret can in fact figure it out completely; in other words, a set of people learns either everything or nothing. It is ideal if, additionally, the number of possible hints equals the number of possible secrets.

Sounds complicated, but all ideal secret sharing schemes have a simple form. Let's call a subset A of P dependent if, given any member a of A, the members of A\{a} can figure out from the hints they were given which hint must have been given to a. All other subsets are independent. Brickell and Davenport proved that every ideal secret sharing scheme gives rise to a matroid in this way.
A matroid is a (rank) function r on P such that
(R1) The value of the rank function is always a non-negative integer and the rank of the empty set is 0.
(R2) For any two subsets A and B of P, r(A∪B) + r (A∩B) ≤ r (A) + r(B).
(R3) For any set A and element a, r(A) ≤ r(A∪{a}) ≤ r(A)+1.
Rank functions measure dependence and independence: a set A is independent if and only if r(A) equals the size of A. Not every matroid arises from a secret sharing scheme, and those that do are called secret sharing matroids.
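The axioms (R1)-(R3) translate directly into code. Here is a small sketch of my own (the function names are not from any library) that checks them by brute force on a small ground set, using the uniform matroid U_{2,3} — rank min(|A|, 2) — as an example:

```python
from itertools import chain, combinations

def subsets(ground):
    """All subsets of the ground set, as frozensets."""
    return [frozenset(c) for c in chain.from_iterable(
        combinations(ground, k) for k in range(len(ground) + 1))]

def is_matroid_rank(ground, r):
    """Check (R1)-(R3) for a candidate rank function r on subsets of ground."""
    all_sets = subsets(ground)
    # (R1): values are non-negative integers and r(empty) = 0.
    if r(frozenset()) != 0 or any(not isinstance(r(A), int) or r(A) < 0
                                  for A in all_sets):
        return False
    # (R2) submodularity: r(A∪B) + r(A∩B) <= r(A) + r(B).
    if any(r(A | B) + r(A & B) > r(A) + r(B)
           for A in all_sets for B in all_sets):
        return False
    # (R3) unit increase: r(A) <= r(A∪{a}) <= r(A) + 1.
    return all(r(A) <= r(A | {a}) <= r(A) + 1
               for A in all_sets for a in ground)

# The uniform matroid U_{2,3}: every set of size at most 2 is independent.
ground = {1, 2, 3}
u23 = lambda A: min(len(A), 2)
print(is_matroid_rank(ground, u23))  # True
```

This exhaustive check is of course exponential in |P|; it is meant only to make the axioms concrete, not to be efficient.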
The second condition here is interesting: it is the so-called submodularity. Submodularity is of ubiquitous importance, and many aspects of it have been explored: connections to electrical network theory (by Narayanan of IIT Bombay, where I spent a year as an undergraduate doing diophantine approximation and Nevanlinna theory; I shall return to that beautiful place and topic in another post), optimization (by Fujishige) and, last but not least, the end of us all: entropy.

Entropy measures disorder. For a random variable X with distribution p, it is formally defined as the expected value H(X) of –log p(X).
Without going into this too much, notice that if X is highly concentrated, then there is less uncertainty and the entropy is smaller.
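To see this concentration effect numerically, here is a short sketch of my own computing the entropy (in bits) of a few distributions:

```python
import math

def entropy(p):
    """Shannon entropy H(X) = E[-log2 p(X)] for a distribution
    given as a list of probabilities summing to 1."""
    return sum(-q * math.log2(q) for q in p if q > 0)

print(entropy([0.5, 0.5]))  # 1.0 -- a fair coin, maximal uncertainty
print(entropy([0.9, 0.1]))  # ≈ 0.469 -- concentrated, less uncertainty
print(entropy([1.0]))       # 0.0 -- a sure thing, no uncertainty at all
```

The more the mass piles up on one outcome, the smaller the entropy, exactly as described above.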
Now, if we have random variables X_1, …, X_n, then we can measure the amount of disorder that any subset A of them induces by looking at the joint entropy, sort of their joint disorder, denoted by H(A) = H(X_a : a ∈ A). Shannon proved that H is submodular. And those matroids whose rank function is realizable as a Shannon entropy function are called entropic matroids.
You can think about entropic matroids in terms of dependence, measuring how dependent stochastic processes are on each other. For instance, if we consider two independent fair bits X, Y together with their XOR Z = X⊕Y, then any two of these variables are independent, but all three together are dependent; this is exactly what the Shannon entropy, and hence the entropic matroid, measures.
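To make this concrete, here is a small computation of my own, using the standard example of two independent fair bits together with their XOR: the joint entropies of the subsets recover exactly the rank function min(|A|, 2) of the uniform matroid U_{2,3}.

```python
import math
from itertools import product, combinations

# Two independent fair bits X, Y and their XOR Z = X ^ Y.
# All four outcomes (x, y, x^y) are equally likely.
outcomes = [(x, y, x ^ y) for x, y in product((0, 1), repeat=2)]

def joint_entropy(indices):
    """Joint entropy (in bits) of the marginal of (X, Y, Z)
    on the given coordinate indices."""
    counts = {}
    for o in outcomes:
        key = tuple(o[i] for i in indices)
        counts[key] = counts.get(key, 0) + 1
    n = len(outcomes)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

for k in range(4):
    for A in combinations(range(3), k):
        print(A, round(joint_entropy(A), 6))
# Each single variable has entropy 1, each pair has entropy 2, and all
# three together still have entropy 2: the entropy function is
# min(|A|, 2), the rank function of U_{2,3}.
```

Any two of the variables are independent (joint entropy 2 = 1 + 1), yet the third is completely determined by the other two — dependence that pairwise statistics alone would miss.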
So what? You might now ask yourself why I am talking about this.
First: why on earth introduce both entropic and secret sharing matroids? Well, that question is easily answered: it turns out that they are the same. Entropic matroids are secret sharing matroids, and vice versa; see this nice article by Emmanuel Abbe and Sophie Spirkl.
Second: why is this interesting? Well, let's say you have a function that associates to every subset of a given set P a nonnegative integer. It is pretty easy to check whether it is submodular (the submodular functions form the so-called submodular cone). But deciding which of those functions come from secret sharing schemes/entropy measurements is much harder.
And in fact, it is undecidable (meaning there is no algorithm that can tell you the answer; hence understanding the geometry of the entropic cone is hard). This means it is somewhat tricky to construct secret sharing schemes (after all, it is hard to construct what you cannot see). This was now shown independently by two teams: one paper is by my former student Lukas Kühne and my current student Geva Yashfe (who got the Tzafriri prize for related work preceding this). And before that, and independently, Cheuk Ting Li at the Chinese University of Hong Kong reached the same result.



They achieved this by reducing the problem of deciding whether a given matroid is entropic/secret sharing to various word problems (in finite groups, monoids and sofic groups), that is, to the problem of deciding whether two different expressions describe the same object, which is known to be undecidable in these settings. Congratulations!