read the k from epoch when the node starts #4460
Conversation
Walkthrough
This pull request adds two new workspace dependencies (starcoin-statedb and starcoin-vm-runtime) to the flexidag crate and extends BlockDAG::check_upgrade to read the on-chain Epoch so the ghost-dag k value can be updated when the node starts.
Sequence Diagram(s)

    sequenceDiagram
        participant NS as NodeService
        participant BD as BlockDAG
        participant ST as Storage
        participant CDB as ChainStateDB
        participant ASR as AccountStateReader
        participant GDM as ghost_dag_manager
        NS->>BD: check_upgrade(main, genesis_id, storage)
        BD->>BD: Evaluate block number vs force upgrade threshold
        alt Upgrade condition met
            BD->>ST: Retrieve state root & chain storage
            BD->>CDB: Initialize ChainStateDB with storage
            CDB->>ASR: Create AccountStateReader
            ASR->>BD: Provide Epoch resource (via genesis address)
            BD->>GDM: Update k using epoch's max uncles value
        else Condition not met
            BD-->>NS: Return without changes
        end
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
Cargo.lock is excluded by `!**/*.lock`
📒 Files selected for processing (3)
- flexidag/Cargo.toml (1 hunks)
- flexidag/src/blockdag.rs (3 hunks)
- node/src/node.rs (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
flexidag/src/blockdag.rs (7)
- vm/types/src/account_config/constants/addresses.rs (1): genesis_address (16-18)
- vm/vm-runtime/src/force_upgrade_management.rs (1): get_force_upgrade_block_number (10-24)
- node/src/lib.rs (1): storage (125-129)
- state/api/src/chain_state.rs (1): state_root (128-128)
- flexidag/src/consensusdb/db.rs (1): new (43-45)
- flexidag/src/ghostdag/protocol.rs (2): new (19-23), new (56-70)
- vm/types/src/on_chain_resource/epoch.rs (1): epoch (138-140)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: build and test
🔇 Additional comments (5)
flexidag/Cargo.toml (1)
31-32: ✅ Verification successful

Confirm correct usage of newly added workspace dependencies.

The additions of `starcoin-statedb` and `starcoin-vm-runtime` look consistent with the new storage and upgrade logic introduced in the BlockDAG. Ensure that these dependencies are actually required for the new logic and do not introduce unnecessary overhead or duplication. Would you like to verify if any references to these crates remain unused in the codebase?

🏁 Script executed:

    #!/bin/bash
    # Check for any usage references of starcoin-statedb and starcoin-vm-runtime
    rg --context 5 'starcoin-statedb|starcoin-vm-runtime'

Length of output: 38428
Dependency usage confirmed: after searching across the repository, both `starcoin-statedb` and `starcoin-vm-runtime` are referenced in multiple Cargo files and in the lock file. This consistent usage across the codebase, covering state management, storage, and VM components, indicates that their inclusion is intentional and aligns with the new storage and upgrade logic in the BlockDAG. There is no evidence of unnecessary overhead or duplication.

node/src/node.rs (1)
309-313: ❓ Verification inconclusive

Ensure proper error handling when calling check_upgrade.

The newly added `storage.clone()` parameter supplies the BlockDAG upgrade logic with persistent storage. This may fail if storage is not fully initialized or if there are concurrency concerns. Verify that any potential errors returned by `check_upgrade` (e.g. missing epoch, I/O issues) are handled gracefully and do not crash the node unexpectedly.

🏁 Script executed:

    #!/bin/bash
    # Description: Search for try/catch or appropriate error handling around `dag.check_upgrade`.
    rg -A 5 "dag.check_upgrade"

Length of output: 272
Ensure proper propagation and handling of errors from check_upgrade.

The current implementation uses the `?` operator here:

    dag.check_upgrade(
        chain_info.status().head(),
        genesis.block().id(),
        storage.clone(),
    )?;

While propagating errors with `?` is idiomatic Rust, please verify that these errors, whether they stem from storage initialization, missing epochs, or potential I/O or concurrency issues, are caught and managed appropriately by the caller. In particular, ensure that the upstream error handling logs sufficient context and avoids causing an unexpected node crash. If additional contextual wrapping or a more granular recovery mechanism is required at this stage, please adjust accordingly.
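If contextual wrapping is desired at this call site, one option is sketched below. This is only a sketch: it assumes the enclosing function returns an `anyhow::Result` (the `format_err!` call in the diff suggests `anyhow` is already in use) and reuses the variable names from the snippet above rather than defining anything new.

```rust
use anyhow::Context;

// Sketch of the call site in node/src/node.rs with contextual wrapping added.
// `dag`, `chain_info`, `genesis`, and `storage` are the values from the
// surrounding node startup code quoted above, not new definitions.
dag.check_upgrade(
    chain_info.status().head(),
    genesis.block().id(),
    storage.clone(),
)
.context("BlockDAG::check_upgrade failed while starting the node")?;
```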
flexidag/src/blockdag.rs (3)

34-37: Validate new imports for consistency.

The new imports, `AccountStateReader`, `ChainStateDB`, `IntoSuper`, and `Storage`, are used in the `check_upgrade` logic. Confirm they are only introduced where needed, as extraneous imports cause clutter and raise maintenance cost.
43-44: Usage of get_force_upgrade_block_number and Epoch identified.

These imports align with the forced-upgrade logic. No immediate issues; just ensure that your code handles different chain IDs correctly and that the `Epoch` type is stable.
1021-1026: New check_upgrade signature requires storage.

Adding `storage: Arc<Storage>` to `check_upgrade` centralizes all relevant state logic in a single call, which can improve clarity. However, ensure that concurrency is handled properly: if multiple callers invoke `check_upgrade` in parallel, shared data in `storage` must remain consistent.

Would you like a shell script to grep for all invocations of this method to ensure it's not called concurrently from multiple services?
    // update k
    if main.number() >= get_force_upgrade_block_number(&main.chain_id()) {
        let state_root = main.state_root();
        let state_db = ChainStateDB::new(storage.clone().into_super_arc(), Some(state_root));
        let account_reader = AccountStateReader::new(&state_db);
        let epoch = account_reader
            .get_resource::<Epoch>(genesis_address())?
            .ok_or_else(|| format_err!("Epoch is none."))?;

        self.ghost_dag_manager()
            .update_k(u16::try_from(epoch.max_uncles_per_block())?);
    }
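A small note on the `u16::try_from(epoch.max_uncles_per_block())?` line above: the on-chain value appears to be a wider integer (the fallible conversion would be unnecessary otherwise), so `try_from` plus `?` turns an out-of-range value into an error instead of silently truncating it. A standalone illustration using only the standard library:

```rust
use std::convert::TryFrom;

fn main() -> Result<(), std::num::TryFromIntError> {
    // An in-range value converts cleanly.
    let k = u16::try_from(10u64)?;
    assert_eq!(k, 10u16);

    // A value above u16::MAX fails, which `?` would propagate as an error
    // rather than truncating it the way `as u16` would.
    assert!(u16::try_from(u64::from(u16::MAX) + 1).is_err());

    println!("k = {k}");
    Ok(())
}
```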
🛠️ Refactor suggestion
Update the DAG's k value from the on-chain Epoch.

When the block number exceeds `get_force_upgrade_block_number`, the code retrieves the on-chain epoch information and updates `ghost_dag_manager().update_k(...)`. Confirm that:

- The code handles any errors in reading the `Epoch` resource (for example, if the resource is absent or corrupted).
- The DAG remains valid if `k` changes during runtime.
- Adequate tests exist to validate this dynamic reconfiguration.

Otherwise, the logic is sound.

Consider adding additional logging or fallback handling if the epoch resource is unavailable. This prevents ungraceful failures in production.
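To make the suggested fallback concrete, one possible shape is sketched below. This is not the PR's actual behavior: it assumes the `log` crate's `info!`/`warn!` macros are available in this module, and it keeps the current k when the Epoch resource cannot be read instead of returning an error. The identifiers otherwise mirror the excerpt above.

```rust
// Hypothetical fallback variant of the k update shown above: a read failure
// or a missing Epoch logs a warning and leaves k unchanged instead of
// aborting check_upgrade.
if main.number() >= get_force_upgrade_block_number(&main.chain_id()) {
    let state_db =
        ChainStateDB::new(storage.clone().into_super_arc(), Some(main.state_root()));
    let account_reader = AccountStateReader::new(&state_db);
    match account_reader.get_resource::<Epoch>(genesis_address()) {
        Ok(Some(epoch)) => {
            let k = u16::try_from(epoch.max_uncles_per_block())?;
            info!("check_upgrade: updating ghost-dag k to {} from on-chain epoch", k);
            self.ghost_dag_manager().update_k(k);
        }
        Ok(None) => warn!("check_upgrade: Epoch resource not found; keeping current k"),
        Err(e) => warn!("check_upgrade: failed to read Epoch ({}); keeping current k", e),
    }
}
```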
Benchmark for b224784
Pull request type
Please check the type of change your PR introduces:
What is the current behavior?
Issue Number: N/A
What is the new behavior?
Other information
Summary by CodeRabbit

New Features
- The node now reads the on-chain Epoch when it starts and updates the DAG's k value from the epoch's max-uncles setting.

Chores
- Added the starcoin-statedb and starcoin-vm-runtime workspace dependencies to the flexidag crate.