Decentralized Triggered Secret Release
Intro
Have you ever had to confront a villain with your discoveries, knowing that unless you take precautions they might just kill you? Or gone on a dangerous adventure, wanting to keep the details secret until you come back (or in case you don't)? Or wanted your passwords to be shared with a friend if you became long-term unavailable?
I haven't either, but who knows.
These problems can be solved with a single trusted secret-holder: you give your secret to someone, and tell them not to look at it or share it until some condition is triggered. (The trigger might be e.g. failure to receive an I'm-still-here message from you every week.)
But that's error-prone. Even if you 100% trust their good intentions, it's still easy for a single point of failure to make a mistake (in either direction: they might be compromised and release the secret too early, or they might disappear or lose the secret).
Decentralizing
Can we achieve the same kind of triggered release without trusting a single intermediary? I think so, and here's the process I'm imagining:
1. The secret-sharer gives (different) data to each of the secret-holders such that some subset of them could reconstruct the secret.
2. The secret-sharer makes public a condition under which the secret-holders should release the secret publicly.
3. If the condition is met, the secret-holders publish their parts of the secret.
4. Now anyone^{1} can reconstruct the secret.
There are some incentives we need to get right in order for this to work:
- each secret-holder should be rewarded for participating in the process (regardless of whether the condition is triggered)
- each secret-holder should be punished if they release their part of the secret too early
- each secret-holder should be punished if they fail to release their part of the secret after the condition is triggered
- each secret-holder should be able to trust that no one else knows their part of the secret
The last condition rules out a straightforward application of Shamir's secret sharing. We can't have the secret-sharer compute the secret-holders' shares of the secret, since then the secret-holders couldn't trust that they won't be wrongly punished (if the secret-sharer accidentally or purposefully released some or all of the shares).
The punishments must consist of significant losses for the secret-holders (not just failure to get paid), because a secret-holder who doesn't follow the rules undermines the integrity of the whole system. We'll require deposits which they lose if the punishment conditions are triggered.
I think we can use Ethereum to achieve these incentives. This procedure will require all $N$ secret-holders for the secret-reconstruction phase, but I think we can modify it later so that only a proper subset of the $N$ are needed. (For convenience, let's say $p_0$ is the secret-sharer, and $p_1$ through $p_N$ are the secret-holders.)
1. The secret-holders make a deposit to the contract.
2. Each secret-holder $p_i$ ($1 \leq i \leq N$) generates a random string $s_i$ of the same length as the secret. For convenience, let's say the secret is $s_0$.
3. Everyone sends the hash of their $s_i$ to the contract.
4. Then everyone computes and releases the sum $S = \sum_{i=0}^N s_i$ without revealing anyone's $s_i$ (see below for more on how to do this). $S$ is now public, but doesn't reveal anything about the secret.
5. If the contract's "release" condition is triggered, then the secret-holders each release $s_i$ ($1 \leq i \leq N$). Now the secret $s_0$ is public, since $s_0 = S - \sum_{i=1}^N s_i$.
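As a concrete (non-blockchain) sketch of the arithmetic, here's the masking and reconstruction in Python, treating each string as an integer and summing modulo $2^{256}$. The modulus and the example secret are my own illustrative choices, not part of the protocol:

```python
import secrets

MOD = 2 ** 256  # fixed width: treat every string as an integer mod 2^256

def as_int(data: bytes) -> int:
    """Interpret a (short) byte string as an integer."""
    return int.from_bytes(data, "big")

# Hypothetical example: p_0's secret, and N = 3 holders' random strings.
s0 = as_int(b"the villain's true identity")
N = 3
s = [secrets.randbelow(MOD) for _ in range(N)]  # s_1 .. s_N

# Step 4: jointly compute S = s_0 + s_1 + ... + s_N (mod 2^256).
# S is public but reveals nothing: it's masked by N uniform random values.
S = (s0 + sum(s)) % MOD

# Step 5: after the release condition triggers, the holders publish
# s_1 .. s_N, and anyone can recover s_0 = S - (s_1 + ... + s_N).
recovered = (S - sum(s)) % MOD
assert recovered == s0
```

Any single $s_i$ (or any proper subset of them, together with $S$) is statistically independent of $s_0$, which is why publishing $S$ is safe.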
Contract
This is pretty rough, but here's a basic idea of the kinds of functions the Ethereum contract would need:
- (constructor): stores the addresses of the secret-holders and secret-sharer; the contract should be created with enough funds to pay the secret-holders for their participation.
- `store_hash(h)`: each $p_i$ calls this with `h = hash(s_i)`, which is stored. We also need something to decide when the hash-collection phase is over.
- `should_release()`: this private function determines whether it's time to release the secret, and can be written however we want. Maybe it's based on the date, or on another function `ping` not receiving a call from the secret-sharer for a certain period of time. I'll assume that after 1 year from contract creation, `should_release()` is guaranteed to return `True`.
- `punish_for_release($p_i$, $s_i$)` (only callable before `should_release()` is true): anyone holding $s_i$ can send $p_i, s_i$ to cause $p_i$ to lose their deposit for releasing too early (and receive a small reward for their trouble).
- `release($s_i$)` (only callable after `should_release()` is true): $p_i$ sends $s_i$, we confirm that its hash matches the stored hash, and $p_i$ receives their deposit back, plus their payment for participating.
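To make the moving parts concrete, here is a toy in-memory Python model of the state machine I have in mind. It is not Solidity and elides real-world details (gas, message authentication, how the slashing reward is split); the SHA-256 hash, the 30-day ping timeout, and the exact method signatures are illustrative assumptions:

```python
import hashlib
import time

class TriggeredReleaseContract:
    """Toy model of the contract's state machine (not real Solidity)."""

    def __init__(self, sharer, holders, deposit, payment,
                 lifetime=365 * 24 * 3600):
        self.sharer = sharer
        self.holders = set(holders)
        self.deposit = deposit            # required deposit per holder
        self.payment = payment            # participation payment per holder
        self.created = time.time()
        self.lifetime = lifetime          # after this, should_release() is True
        self.hashes = {}                  # holder -> hash(s_i)
        self.balances = {h: 0 for h in holders}
        self.last_ping = self.created

    def store_hash(self, holder, digest):
        """Holder commits to hash(s_i) and locks their deposit."""
        assert holder in self.holders
        self.hashes[holder] = digest
        self.balances[holder] += self.deposit

    def ping(self):
        """The secret-sharer checks in to delay release."""
        self.last_ping = time.time()

    def should_release(self, ping_timeout=30 * 24 * 3600):
        now = time.time()
        return (now - self.last_ping > ping_timeout
                or now - self.created > self.lifetime)

    def punish_for_release(self, holder, s_i):
        """Anyone who learns s_i before the trigger can slash the deposit."""
        if self.should_release():
            return False
        if hashlib.sha256(s_i).digest() == self.hashes.get(holder):
            slashed, self.balances[holder] = self.balances[holder], 0
            return slashed  # part of this would reward the reporter
        return False

    def release(self, holder, s_i):
        """After the trigger, a matching preimage earns deposit + payment."""
        if not self.should_release():
            return None
        if hashlib.sha256(s_i).digest() != self.hashes.get(holder):
            return None
        payout = self.balances[holder] + self.payment
        self.balances[holder] = 0
        return payout
```

The key invariant is that the same hash commitment serves both directions: before the trigger it lets anyone prove a leak and slash the deposit, and after the trigger it lets the holder prove they released the right $s_i$.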
How to Add
How do we compute the sum $S = \sum_{i=0}^N s_i$ without revealing anyone's $s_i$? Here's one way^{2}:
1. Each $p_i$ generates a random $r_{ij}$ to send to each $p_j$.
2. Each $p_i$ now knows $r_{ij}$ and $r_{ji}$ for all other players $j$.
3. Each $p_i$ computes and publishes $s_i + \sum_{j \neq i} r_{ji} - \sum_{j \neq i} r_{ij}$.
4. Everyone can add up the numbers in (3) to obtain $S$: each $r_{ij}$ appears once with a plus sign (at $p_j$) and once with a minus sign (at $p_i$), so all the masks cancel.
What about cheaters?
What if someone provides the wrong input to the sum computation, one they don't even store? Then the secret won't be recoverable, but we can't punish them for this or even know that they're behaving inconsistently (publishing the hash of a different $s_i$ than the one they used in the sum).
I can think of two ways to address this risk:
We could do a random number of "trial runs" first, where the secret-sharer uses a random $s_0$ instead of their secret. Only the secret-sharer will know whether we're in a trial run. After the secret-sharer announces "that was a trial run", everyone (including the secret-sharer) reveals their $s_i$. If the sum is $S$ and the hashes match up, then everyone followed the protocol. If everyone followed the protocol in a bunch of trial runs (and they don't know when the real one is coming), then they'll probably be following it in the real run too.
Alternatively, we could switch to something fancier. A lot has been written about secure multi-party computation, including with error detection.
M of N
As described, this scheme requires all the secret-holders to follow the protocol for the secret release. This means that even a single one could hold out for extra payment (or just lose their $s_i$).
We really want the secret to be reconstructable from just a smaller subset of the secret-holders.
I think it's probably possible to do this using polynomials, like in Shamir's secret sharing; a very inefficient (probably too inefficient) fallback would be to repeat the whole procedure for each subset of $M$ secret-holders.
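For reference, plain Shamir M-of-N sharing looks like this in Python. Note that, as discussed above, having the secret-sharer compute the shares directly conflicts with the "no one else knows your share" requirement, so this is only a starting point; the prime field is an arbitrary toy choice:

```python
import secrets

PRIME = 2 ** 127 - 1  # a Mersenne prime, large enough for a toy field

def make_shares(secret, m, n):
    """Split `secret` into n shares; any m of them reconstruct it."""
    # Random degree-(m-1) polynomial with f(0) = secret.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(m - 1)]
    def f(x):
        return sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

shares = make_shares(123456789, m=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 suffice
assert reconstruct(shares[2:5]) == 123456789
```

The open question is whether the holders can jointly arrive at such shares (and the commitments the contract needs) without anyone, including the sharer, learning another holder's share.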
Amortize
It might make sense for a secret-holder to be able to use the same deposit for multiple different people's secrets. The benefit would be having a larger deposit overall, so perhaps greater security. The drawback would be that once a secret-holder has lost their deposit for revealing one $s_i$ too early, they have no incentive to keep the others secret or reveal them at the right time.
Time limits
The scheme requires the secret to be released within some fixed time period, because the depositors need to get their deposit back at some point.
Alternatively, we could set things up so that the secret-sharer has the option to pay the holders for each additional time period during which they don't release the secret. As with the original payment, the continuing payments would have to be at least a bit higher than the interest the secret-holders could earn on their deposit if they hadn't made it.
The secret-holders would not have the option to withdraw their deposit unless the secret-sharer stops paying.
Thanks
Thanks to my friend Jeff Kaufman for earlier conversations about this.

Note that this same process can be used for releasing a secret to a single secret recipient rather than the public: first the secret-sharer would encrypt the secret with the recipient's public key, and then the rest of the process is followed as usual. ↩

Adding this way isn't original to me. I'm not sure where I came across this, but there's a large literature on this kind of thing. ↩