Split up ConfirmationTarget
even more
#3268
Conversation
Codecov Report

@@ Coverage Diff @@
##             main    #3268      +/-   ##
==========================================
+ Coverage   89.82%   90.54%   +0.71%
==========================================
  Files         125      126        +1
  Lines      102798   108880    +6082
  Branches   102798   108880    +6082
==========================================
+ Hits        92337    98580    +6243
+ Misses       7743     7658      -85
+ Partials     2718     2642      -76

View full report in Codecov by Sentry.
cc: @Beige-Coffee
@@ -49,6 +49,12 @@ pub trait BroadcasterInterface {
 /// estimation.
 #[derive(Clone, Copy, Debug, Hash, PartialEq, Eq)]
 pub enum ConfirmationTarget {
	/// The most conservative feerate estimate available.
I'm not sure I can make sense of these docs. Reading "The most conservative feerate estimate available.", I'd plug in the mempool minimum and be done with it. Can we give some guidance what this actually means and how users would retrieve such a 'conservative estimate'?
Hmm, I can add "(i.e. highest)", but I feel like "the most conservative" is clear (as long as it's clarified that it means highest, not lowest). It should be the most conservative you can go on your API, whatever API that is.
Would `MaximumAcceptableFee` be accurate/clearer? I agree that conservative could be read in the opposite way...
I don't think we want to communicate to folks that this has to be an "acceptable" fee that you might pay, but rather the maximum fee that a peer might reasonably pick without it clearly being an attack.
Fair enough. I tried to come up with something like `MaxForceCloseAvoidanceFeerate` or `MaxFeerateFromPeer`, but it's a bit wordy and I think this is fine as-is with the docs clarification.
Or maybe just `MaximumFeeEstimate`?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
That seems reasonable!
Force-pushed 4a0a596 to ef1fff4.
Basically LGTM, I'm good with a squash
}
if let Some(txid) = self.prev_counterparty_commitment_txid {
	if !self.counterparty_claimable_outpoints.get(&txid).unwrap().is_empty() {
		return ConfirmationTarget::UrgentOnChainSweep;
FWIW, we don't have test coverage here
Btw, there still isn't test coverage here (won't hold the PR up over it though).
Force-pushed ef1fff4 to 5971471.
LGTM, feel free to squash.
Force-pushed d9815d0 to 0a452be.
Squashed.
CI is sad
Force-pushed 0a452be to 3bc61b7.
🤦
When we broke `ConfirmationTarget` out into task-specific names, we left `MaxDustHTLCExposure::FeeRateMultiplier` using the "when we broadcast" feerate, as we were mostly concerned about the dust thresholds on outbound channels where we pick the fee and drive our own funds to dust.

In 51bf78d, that changed to include transaction fees on both inbound and outbound channels in our dust exposure amount, but we continued to use `ConfirmationTarget::OnChainSweep` for the fee estimator threshold. While the `MaxDustHTLCExposure::FeeRateMultiplier` value is quite conservative and shouldn't lead to force-closures unless feerate estimates disagree by something like 500 sat/vB (with only one HTLC active in a channel), this happened on Aug 22 when feerates spiked from 4 sat/vB to over 1000 sat/vB in one block.

To avoid simple feerate estimate horizons causing this in the future, here we add a new `ConfirmationTarget::MaximumFeeEstimate` which is used for dust calculations. This allows users to split out the estimates they use for checking counterparty feerates from the estimates used for actual broadcasting.
Force-pushed 3bc61b7 to 38bdd72.
When we force-close a channel, occasionally its due to feerate disagreements or other non-HTLC-related issues. In those cases, there's no reason to use a very urgent feerate estimate - we don't have any timers expiring soon. Instead, we should give users the information they need to be more economical on fees in this case, which we do here by splitting `OnChainSweep` into `UrgentOnChainSweep` and `NonUrgentOnChainSweep` `ConfirmationTarget`s.
Force-pushed 38bdd72 to 76f6e9d.
Benchmark is failing
Force-pushed 76f6e9d to b0f5c19.
Grrrrr.
This will allow us to test `ConfirmationTarget`s used in functional tests by setting an override on just the target we expect to be used.
Our tests should never ignore the events generated as they provide critical context about what's happening in LDK. Here we fix `test_yield_anchors_events` to avoid doing so.
This updates `test_yield_anchors_events` to test both anchor channels with and without HTLCs, and relies on overriding only the singular expected `ConfirmationTarget` used, testing the new `ConfirmationTarget::UrgentOnChainSweep` use.
Force-pushed b0f5c19 to cf97cef.
Ugh, yet another one...
This morning we had a sudden feerate spike where feerates spiked something like 100x between one or two blocks, causing my node (and a few other LDK users) to have a feerate estimate 100x lower than that of some of my peers. This caused the new "feerate expansion is treated as dust" logic to kick in and FC two of my channels.
Here we fix this issue by giving the "feerate expansion is treated as dust" logic its own `ConfirmationTarget`, which can be yet more conservative so that we're always more conservative than our peers. Further, because I FC'd while fees were incredibly high, I ended up spending nontrivial funds on the FC transactions, even though there was no particular need to rush - neither of the lost channels had HTLCs, so they weren't trying to get confirmed immediately.

Here we fix this second issue by breaking up `OnChainSweep` into an urgent and a non-urgent version.