Cabal Liquid Staking Token
Findings & Analysis Report
2025-05-28
Table of contents
- Summary
- Scope
- Severity Criteria
- High Risk Findings (1)
- [H-01] LP unstaking only burns the shares but leaves the underlying tokens in the system, which distorts the shares-to-tokens ratio and leads to incorrect amounts being calculated during staking and unstaking
- Medium Risk Findings (7)
- [M-01] Reentrancy Check in lock_staking::reentry_check Causes Concurrent INIT Deposit Failures (DOS)
- [M-02] Unstaking calculates user share at request time, ignoring slashing — leading to DoS and unfair distribution
- [M-03] Attacker Can Desynchronize Supply Snapshot During Same-Block Unstake, Reducing Everyone’s Rewards
- [M-04] Unstaking from LP pools will cause underflow and lock user funds
- [M-05] Last Holder Can’t Exit, Zero‑Supply Unstake Reverts
- [M-06] LP Redelegation Uses Inaccurate Internal Tracker Amount, Leading to Potential Failures or Orphaned Funds
- [M-07] Desynchronization of Cabal’s internal accounting with actual staked INIT amounts leads to over-minting of sxINIT tokens
- Disclosures
Overview
About C4
Code4rena (C4) is a competitive audit platform where security researchers, referred to as Wardens, review, audit, and analyze codebases for security vulnerabilities in exchange for bounties provided by sponsoring projects.
A C4 audit is an event in which community participants, referred to as Wardens, review, audit, or analyze smart contract logic in exchange for a bounty provided by sponsoring projects.
During the audit outlined in this document, C4 conducted an analysis of the Cabal Liquid Staking Token smart contract system. The audit took place from April 28 to May 05, 2025.
Final report assembled by Code4rena.
Summary
The C4 analysis yielded an aggregated total of 8 unique vulnerabilities. Of these vulnerabilities, 1 received a risk rating in the category of HIGH severity and 7 received a risk rating in the category of MEDIUM severity.
All of the issues presented here are linked back to their original finding, which may include relevant context from the judge and Cabal team.
Scope
The code under review can be found within the C4 Cabal Liquid Staking Token repository. It is composed of 8 smart contracts written in the Move programming language and includes 2,574 lines of Move code.
Severity Criteria
C4 assesses the severity of disclosed vulnerabilities based on three primary risk categories: high, medium, and low/non-critical.
High-level considerations for vulnerabilities span the following key areas when conducting assessments:
- Malicious Input Handling
- Escalation of privileges
- Arithmetic
- Gas use
For more information regarding the severity criteria referenced throughout the submission review process, please refer to the documentation provided on the C4 website, specifically our section on Severity Categorization.
High Risk Findings (1)
[H-01] LP unstaking only burns the shares but leaves the underlying tokens in the system, which distorts the shares-to-tokens ratio and leads to incorrect amounts being calculated during staking and unstaking
Submitted by TheSchnilch, also found by ret2basic
Finding description
When a user unstakes LP tokens, the corresponding shares (Cabal tokens) are burned. However, the actual undelegation from the validator will occur only after a delay of up to 3 days. During this period, the shares are already burned, but the underlying tokens are still included in shares-to-token conversions.
This is a problem because, in process_lp_unstake, the amount of tokens to unbond is calculated as follows:
https://github.com/code-423n4/2025-04-cabal/blob/5b5f92ab4f95e5f9f405bbfa252860472d164705/sources/cabal.move#L1051-L1054
The lp_amount is calculated based on the amount of tokens actually staked on the validator. This includes tokens that are pending to be undelegated (unstaked_pending_amounts), for which the Cabal tokens have already been burned.
This means that the unbonding_amount is also calculated incorrectly because the lp_amount is too high. As a result, the unbonding_amount will also be too high, and the unstaker will receive too many tokens that actually belong to other users.
Since the Cabal tokens a user receives are also calculated this way in process_lp_stake, users will receive too few shares when there are pending undelegations. As a result, they will have fewer tokens after the next batch_undelegate_pending_lps:
https://github.com/code-423n4/2025-04-cabal/blob/5b5f92ab4f95e5f9f405bbfa252860472d164705/sources/cabal.move#L946-L953
Impact
Because users receive too many tokens that actually belong to other users, and since this issue occurs during normal unstaking and staking, it is high severity.
Recommended mitigation steps
The unstaked_pending_amounts should be subtracted from the lp_amount to correctly account for the pending tokens to be undelegated, for which the Cabal tokens have already been burned.
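A minimal sketch of that adjustment, reusing names that appear elsewhere in this report (unstaked_pending_amounts, stake_token_metadata, compound_lp_pool_rewards); the exact fields available inside process_lp_unstake may differ:
// Sketch only: exclude LP that is already queued for undelegation, since the
// corresponding Cabal tokens were burned when those earlier requests were made.
let reward_amount = compound_lp_pool_rewards(m_store, unstaking_type);
let real_stakes = pool_router::get_real_total_stakes(m_store.stake_token_metadata[unstaking_type]);
let pending = m_store.unstaked_pending_amounts[unstaking_type];
let lp_amount = reward_amount + real_stakes - pending;
// cabal_lp_amount (the live Cabal LP token supply) and the ratio math stay as before.
let ratio = bigdecimal::from_ratio_u128((unstake_amount as u128), cabal_lp_amount);
let unbonding_amount = bigdecimal::mul_by_u64_truncate(ratio, lp_amount);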
Proof of Concept
#[test(
c = @staking_addr, user_a = @0xAAA, user_b = @0xBBB, user_c = @0xCCC
)]
fun test_poc(
c: &signer,
user_a: &signer,
user_b: &signer,
user_c: &signer
) {
test_setup(c, string::utf8(b"initvaloper1test"));
//gets the metadata for all tokens
let ulp_metadata = coin::metadata(@initia_std, string::utf8(b"ulp"));
let init_metadata = coin::metadata(@initia_std, string::utf8(b"uinit"));
let cabal_lp_metadata = cabal::get_cabal_token_metadata(1);
let x_init_metadata = cabal::get_xinit_metadata();
let sx_init_metadata = cabal::get_sxinit_metadata();
let initia_signer = &account::create_signer_for_test(@initia_std);
let ulp_decimals = 1_000_000; //ulp has 6 decimals
let deposit_amount_a = 100 * ulp_decimals; //the amount user a deposits
primary_fungible_store::transfer( //user a must first be funded
initia_signer,
ulp_metadata,
signer::address_of(user_a),
deposit_amount_a
);
utils::increase_block(1, 1);
cabal::mock_stake(user_a, 1, deposit_amount_a); //user a stakes 100 ulp
utils::increase_block(1, 1);
let deposit_amount_b = 50 * ulp_decimals; //the amount user b stakes
primary_fungible_store::transfer(
initia_signer,
ulp_metadata,
signer::address_of(user_b),
deposit_amount_b
);
utils::increase_block(1, 1);
cabal::mock_stake(user_b, 1, deposit_amount_b); //user b stakes 50 ulp
utils::increase_block(1, 1000);
cabal::mock_unstake(user_b, 1, deposit_amount_b); //user b unstakes 50 ulp this means the cabal tokens are now 100 and the underlying tokens 150
//This mock unstaking uses the pool balances instead of querying the validator because Cosmos is not supported during testing.
//However, this is not a problem, since the pools are only modified after the undelegation, not during the unstaking
utils::increase_block(1, 1000);
cabal::mock_unstake(user_a, 1, 50 * ulp_decimals); //user a unstakes half of his cabal lp tokens for which 50 ulp tokens should be unstaked but actually 75 are getting unstaked
}
You can also add debug::print(&unbonding_amount); to line 1334 in cabal.move to verify that 75 ULP tokens are being unstaked instead of 50.
To run the POC, paste it into the file tests/core_staking_test.move and run the command initiad move test -f test_poc.
Medium Risk Findings (7)
[M-01] Reentrancy Check in lock_staking::reentry_check Causes Concurrent INIT Deposit Failures (DOS)
Submitted by rare_one
Finding description and impact
The liquid staking protocol’s deposit_init_for_xinit function, which allows users to deposit INIT tokens to receive xINIT, is vulnerable to transaction failures when multiple users deposit concurrently in the same block. The function withdraws INIT tokens and delegates them to a validator via pool_router::add_stake, which triggers lock_staking::delegate. This, in turn, invokes reentry_check to prevent multiple delegations in the same block.
If a second user attempts to deposit in the same block as another, their transaction fails with error code 196618 (EREENTER), as reentry_check detects that the StakingAccount was already modified in the current block. This vulnerability disrupts users’ ability to participate in the protocol, particularly during periods of high transaction activity.
Root Cause:
The reentry_check function in lock_staking.move enforces a strict one-delegation-per-block rule for a given StakingAccount:
fun reentry_check(
staking_account: &mut StakingAccount,
with_update: bool
) {
let (height, _) = block::get_block_info();
assert!(staking_account.last_height != height, error::invalid_state(EREENTER));
if (with_update) {
staking_account.last_height = height;
};
}
This function checks if staking_account.last_height equals the current block height and aborts with EREENTER if true. If with_update is true, it updates last_height to the current height, marking the block as processed.
In cabal.move, the deposit_init_for_xinit function processes user deposits independently:
public entry fun deposit_init_for_xinit(account: &signer, deposit_amount: u64) acquires ModuleStore {
emergency::assert_no_paused();
assert!(deposit_amount > 0, error::invalid_argument(EINVALID_COIN_AMOUNT));
let m_store = borrow_global<ModuleStore>(@staking_addr);
let coin_metadata = coin::metadata(@initia_std, string::utf8(b"uinit"));
// calculate mint xinit
let init_amount = pool_router::get_real_total_stakes(coin_metadata);
let x_init_amount = option::extract(&mut fungible_asset::supply(m_store.x_init_metadata));
let mint_x_init_amount = if (x_init_amount == 0) {
deposit_amount
} else {
let ratio = bigdecimal::from_ratio_u64(deposit_amount, init_amount);
// Round up because of truncation
(bigdecimal::mul_by_u128_ceil(ratio, x_init_amount) as u64)
};
assert!(mint_x_init_amount > 0, error::invalid_argument(EINVALID_STAKE_AMOUNT));
// withdraw init to stake
let fa = primary_fungible_store::withdraw(
account,
coin_metadata,
deposit_amount
);
pool_router::add_stake(fa); // Triggers lock_staking::delegate
// mint xINIT to user
coin::mint_to(&m_store.x_init_caps.mint_cap, signer::address_of(account), mint_x_init_amount);
}
When multiple users call deposit_init_for_xinit in the same block:
- The first user’s deposit passes reentry_check, updates staking_account.last_height to the current block height (assuming with_update = true in lock_staking::delegate), and completes, minting xINIT.
- The second user’s deposit triggers reentry_check via pool_router::add_stake and lock_staking::delegate. Since staking_account.last_height equals the current height, the transaction aborts with EREENTER, preventing the deposit and xINIT minting.
The function’s lack of coordination for concurrent deposits results in multiple lock_staking::delegate calls, triggering the reentrancy failure. This vulnerability is evident in production scenarios where users deposit INIT during high network activity, such as during market events or protocol launches.
IMPACTS:
Denial-of-Service (DoS) for Users: Users attempting to deposit INIT in a block with multiple deposits will face transaction failures, losing gas fees and being unable to receive xINIT. This disrupts their ability to participate in liquid staking, particularly during peak usage periods.
Financial Loss: Failed transactions result in gas fee losses for users, which can accumulate significantly in high-traffic scenarios, deterring participation.
Recommended mitigation steps
Implement a batching mechanism to aggregate all user INIT deposits within a block and process them as a single delegation, ensuring only one call to lock_staking::delegate per block and bypassing the reentry_check restriction.
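One possible shape for such batching is sketched below; the PendingDeposits resource and the queue_deposit / flush_pending_deposits entry points are hypothetical and not part of the audited code:
// Hypothetical sketch: users only queue deposits, and a single flush per block
// performs the one pool_router::add_stake call, so reentry_check is hit once.
struct PendingDeposits has key {
    total_amount: u64,            // INIT queued in the current block
    depositors: vector<address>,  // who should receive xINIT
    amounts: vector<u64>,         // how much each depositor queued
}

public entry fun queue_deposit(account: &signer, deposit_amount: u64) acquires PendingDeposits {
    // Only record the claim here; in a full implementation the user's INIT would
    // also be withdrawn into the module's asset store. No delegation happens in
    // this call, so concurrent depositors never trigger EREENTER.
    let pending = borrow_global_mut<PendingDeposits>(@staking_addr);
    pending.total_amount = pending.total_amount + deposit_amount;
    vector::push_back(&mut pending.depositors, signer::address_of(account));
    vector::push_back(&mut pending.amounts, deposit_amount);
}

public entry fun flush_pending_deposits() acquires PendingDeposits {
    let pending = borrow_global_mut<PendingDeposits>(@staking_addr);
    // Called once per block (e.g. by a keeper): one delegation covering every
    // queued deposit, followed by pro-rata xINIT minting (omitted in this sketch).
    // pool_router::add_stake(<FungibleAsset worth pending.total_amount>);
    pending.total_amount = 0;
    pending.depositors = vector::empty<address>();
    pending.amounts = vector::empty<u64>();
}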
Proof of Concept
Initialize the protocol using initialize to set up the xINIT pool.
Simulate two users depositing INIT in the same block using mock_deposit_init_for_xinit.
Observe the EREENTER error (code 196618) from reentry_check for the second deposit.
// User 1 transaction (submitted in block 100)
public entry fun user1_deposit(account: &signer) { deposit_init_for_xinit(account, 500000000); }
// User 2 transaction (submitted in block 100)
public entry fun user2_deposit(account: &signer) { deposit_init_for_xinit(account, 200000000); }
Setup:
Deploy the protocol and initialize it.
Fund User 1 (@0x1) with 500,000,000 INIT and User 2 (@0x2) with 200,000,000 INIT.
Set block height to 100.
User 1 submits user1_deposit in block 100, calling deposit_init_for_xinit, withdrawing 500,000,000 INIT, delegating via pool_router::add_stake (triggering lock_staking::delegate), and minting approximately 500,000,000 xINIT (adjusted for pool size).
User 2 submits user2_deposit in block 100, calling deposit_init_for_xinit, but pool_router::add_stake triggers lock_staking::delegate and reentry_check. Since staking_account.last_height equals 100 (from User 1’s deposit), the transaction aborts with EREENTER (code 196618).
Result:
User 1: Receives ~500,000,000 xINIT.
User 2: Transaction fails, loses gas fees, receives no xINIT.
This test demonstrates the issue:
fun test_concurrent_deposits(c: &signer, user_a: &signer, user_b: &signer) {
test_setup(c, string::utf8(b"initvaloper1test"));
let init_metadata = coin::metadata(@initia_std, string::utf8(b"uinit"));
let x_init_metadata = cabal::get_xinit_metadata();
// Transfer INIT to users
let deposit_a = 500_000_000;
let deposit_b = 200_000_000;
primary_fungible_store::transfer(c, init_metadata, signer::address_of(user_a), deposit_a);
primary_fungible_store::transfer(c, init_metadata, signer::address_of(user_b), deposit_b);
// Simulate concurrent deposits (no block increase between them)
cabal::mock_deposit_init_for_xinit(user_a, deposit_a);
cabal::mock_deposit_init_for_xinit(user_b, deposit_b);
utils::increase_block(1, 1);
// Verify xINIT balances
let user_a_xinit = primary_fungible_store::balance(signer::address_of(user_a), x_init_metadata);
let user_b_xinit = primary_fungible_store::balance(signer::address_of(user_b), x_init_metadata);
assert!(user_a_xinit == deposit_a || user_a_xinit == deposit_a - 1, 1007);
assert!(user_b_xinit == deposit_b || user_b_xinit == deposit_b - 1, 1008);
// Verify global state
let final_xinit_supply = cabal::get_xinit_total_supply();
let final_total_staked_init = cabal::get_pool_router_total_init();
assert!(final_xinit_supply == (MINIMUM_LIQUIDITY as u128) + (deposit_a as u128) + (deposit_b as u128), 1009);
assert!(final_total_staked_init == MINIMUM_LIQUIDITY + deposit_a + deposit_b, 1010);
}
and the result:
Failures in 0xe472ba1c00b2ee2b007b4c5788839d6fb7371c6::core_staking_test:
┌── test_concurrent_deposits ──────
│ error[E11001]: test failure
│ ┌─ ././vip-contract/sources/lock_staking.move:1226:9
│ │
│ 1222 │ fun reentry_check(
│ │ ------------- In this function in 0xe55cc823efb411bed5eed25aca5277229a54c62ab3769005f86cc44bc0c0e5ab::lock_staking
│ ·
│ 1226 │ assert!(staking_account.last_height != height, error::invalid_state(EREENTER));
│ │ ^^^^^^ Test was not expected to error, but it aborted with code 196618 originating in the module e55cc823efb411bed5eed25aca5277229a54c62ab3769005f86cc44bc0c0e5ab::lock_staking rooted here
│
│
│ stack trace
│ lock_staking::delegate_internal(././vip-contract/sources/lock_staking.move:715)
│ lock_staking::delegate(././vip-contract/sources/lock_staking.move:256)
│ pool_router::mock_process_delegate_init(./sources/pool_router.move:608-614)
│ pool_router::mock_add_stake(./sources/pool_router.move:630)
│ cabal::mock_deposit_init_for_xinit(./sources/cabal.move:1196)
│ core_staking_test::test_concurrent_deposits(./tests/core_staking_test.move:780)
│
└──────────────────
Test result: FAILED. Total tests: 1; passed: 0; failed: 1
[M-02] Unstaking calculates user share at request time, ignoring slashing — leading to DoS and unfair distribution
Submitted by 0xAlix2, also found by adam-idarrha, givn, maxzuvex, and TheSchnilch
https://github.com/code-423n4/2025-04-cabal/blob/main/sources/cabal.move#L1075-L1080 https://github.com/code-423n4/2025-04-cabal/blob/main/sources/cabal.move#L1017-L1022
Finding Description and Impact
Users can stake both INIT and LP tokens into different validator pools by calling functions like deposit_init_for_xinit or stake_asset. To exit, users initiate an unstake via initiate_unstake, which starts an unbonding period. After this delay, they can claim their tokens through claim_unbonded_assets.
Behind the scenes, these staked assets are delegated to validators, and slashing may occur—meaning a portion of the delegated tokens could be penalized (burned). To stay accurate, the protocol uses pool_router::get_real_total_stakes to track the current delegated amount. However, the current unstaking flow doesn’t properly account for slashing events that may occur during the unbonding period.
When a user initiates an unstake, either process_lp_unstake or process_xinit_unstake is called. For simplicity, we focus on process_lp_unstake.
In process_lp_unstake, the claimable amount is calculated up front at unstake time:
let reward_amount = compound_lp_pool_rewards(m_store, unstaking_type);
let lp_amount = reward_amount + pool_router::get_real_total_stakes(...);
let cabal_lp_amount = option::extract(...);
let ratio = bigdecimal::from_ratio_u128(unstake_amount as u128, cabal_lp_amount);
let unbonding_amount = bigdecimal::mul_by_u64_truncate(ratio, lp_amount);
...
vector::push_back(&mut cabal_store.unbonding_entries, UnbondingEntry {
...
amount: unbonding_amount,
...
});
Later, in claim_unbonded_assets, this precomputed amount is blindly transferred to the user:
primary_fungible_store::transfer(
&package::get_assets_store_signer(),
metadata,
account_addr,
amount // ← Precomputed at unstake time
);
This design introduces a critical flaw: it assumes the pool value remains constant between unstake and claim, which is not guaranteed. If slashing happens during this period:
- A large user may claim more than the pool holds → DoS
- An early user may claim full value post-slash → Other users absorb full loss
NB: This differs from systems like Lido, where the amount returned is computed at claim time based on the user’s share of the pool, ensuring fair slashing distribution.
Recommended Mitigation Steps
Instead of locking in the claimable amount at unstake time, store the user’s percentage share of the total LP supply. Then, during claim_unbonded_assets, recalculate the actual amount using the current pool value (i.e., post-slash).
This ensures slashing risk is shared proportionally among all stakers, and prevents DoS or overclaiming exploits.
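A conceptual sketch of that change; the burned_shares and supply_at_request fields on UnbondingEntry are illustrative additions, and the claim-time valuation reuses the helpers shown above:
// At request time: record the burned shares and the supply they were part of,
// instead of a fixed token amount (illustrative fields).
vector::push_back(&mut cabal_store.unbonding_entries, UnbondingEntry {
    // ...
    burned_shares: unstake_amount,        // Cabal LP shares burned by this request
    supply_at_request: cabal_lp_amount,   // total share supply observed at request time
    // ...
});

// At claim time: convert shares to tokens against the CURRENT (possibly slashed) pool value.
let current_pool_value = compound_lp_pool_rewards(m_store, unstaking_type)
    + pool_router::get_real_total_stakes(m_store.stake_token_metadata[unstaking_type]);
let ratio = bigdecimal::from_ratio_u128((entry.burned_shares as u128), entry.supply_at_request);
let payout = bigdecimal::mul_by_u64_truncate(ratio, current_pool_value);
primary_fungible_store::transfer(&package::get_assets_store_signer(), metadata, account_addr, payout);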
Proof of Concept
Case 1 – Whale Unstakes 50%, Then Pool Is Slashed by 51%
Scenario:
- Total pool value: 1,000 LP tokens
- A whale holds 500 LP and unstakes it, expecting to claim 500 units
- The remaining users hold the other 500 LP
- Before the whale claims, the pool is slashed by 51%, reducing it to 490 units
Current behavior (problem):
- The whale still tries to claim 500 units
- The pool only has 490 units left → this would revert, fail, or break accounting
- Essentially, the whale locks in a pre-slash value and now the pool can’t fulfill it
What should happen:
- Claim should be recalculated at execution time
- 500 LP × (490 / 1000) = 245 units
- Whale gets 245 units, the rest of the pool reflects that slashing fairly across all holders
Case 2 – Early User Unstakes, Pool Slashed, Claims Full Amount
Scenario:
- Pool has 1,000 LP total
- User A holds 100 LP, unstakes and expects 100 units
- User B holds 900 LP
- A 50% slash hits before User A claims → pool is now worth 500 units
Current behavior (problem):
- User A claims 100 units (based on original rate)
- Only 400 units remain for User B’s 900 LP
- That means User B absorbs the full impact of the slash — clearly unfair
What should happen:
- Claim is based on current pool state
- 100 LP × (500 / 1000) = 50 units
- User A gets 50 units, User B’s 900 LP is worth 450 → everyone shares the slash proportionally
[M-03] Attacker Can Desynchronize Supply Snapshot During Same-Block Unstake, Reducing Everyone’s Rewards
Submitted by maxzuvex, also found by 0xAlix2 and TheSchnilch
Finding description and impact
An attacker holding Cabal LSTs (like sxINIT) can monitor the mempool for the manager’s voting_reward::snapshot() transaction. By submitting his own cabal::initiate_unstake transaction to execute in the same block (H) as the manager’s snapshot, the attacker can use two flaws:
- cabal_token::burn (called by their unstake) doesn’t update the supply snapshot for block H, leaving the recorded supply artificially high (pre-burn).
- cabal_token::check_snapshot skips recording the attacker’s own balance for block H.
Later reward calculations use the stale high supply but retrieve the attacker’s now lower (post-burn) balance via fallback logic. This desynchronization causes the total calculated reward shares to be less than 100%, reducing the rewards paid out to all users for that cycle.
Attacker Exploit:
- Manager Snapshots Supply: voting_reward::snapshot triggers cabal_token::snapshot, recording the LST total supply (S₀) for block H.
- User Unstakes (Same Block H): The user calls cabal::initiate_unstake.
- Internally, cabal_token::check_snapshot is called but skips writing the user’s pre-burn balance for block H due to same-block logic.
- The user’s live balance decreases. cabal_token::burn executes, reducing the live supply, but fails to update the recorded supply snapshot for H (which remains S₀).
- Reward Calculation Uses Inconsistent State: Later, rewards for cycle H are calculated: get_snapshot_supply(H) returns the stale, pre-burn S₀, and get_snapshot_balance(user, H) finds no user snapshot for H and falls back, returning the user’s live, post-burn balance.
- Result: The reward share calculation uses post_burn_balance / pre_burn_supply, causing the sum of all shares to be < 1, thus reducing payouts for everyone. An attacker triggers this by ensuring their initiate_unstake executes in the same block as the manager’s snapshot (e.g., via mempool monitoring).
// 1. In `cabal_token::burn` (called by attacker's `initiate_unstake` in block H)
public fun burn(burn_cap: &BurnCapability, fa: FungibleAsset) acquires ManagingRefs, HolderStore, ModuleStore { // Added missing acquires for context
let metadata = burn_cap.metadata;
let metadata_addr = object::object_address(&metadata);
assert!(exists<ManagingRefs>(metadata_addr), EMANAGING_REFS_NOT_FOUND);
let refs = borrow_global<ManagingRefs>(metadata_addr);
// Burn reduces the LIVE supply
fungible_asset::burn(&refs.burn_ref, fa);
// --- VULNERABILITY PART 1 ---
// ATTACKER EXPLOIT: This function is called in block H AFTER cabal_token::snapshot recorded
// the supply. However, UNLIKE mint_to, this function DOES NOT check if it's the snapshot
// block and DOES NOT update the HolderStore::supply_snapshots table for block H.
// The recorded supply for H remains the stale, pre-burn value (S₀).
/* Missing logic similar to mint_to:
if (is_snapshot_block) {
update supply_snapshots table with new (lower) supply S₁;
}
*/
}
// 2. In `cabal_token::check_snapshot` (called during attacker's unstake in block H)
fun check_snapshot(c_balance: &mut CabalBalance, current_snapshot_block: u64, prev_snapshot_block: Option<u64>) {
let current_block_height = block::get_current_block_height(); // Is H
let snapshot_block = current_snapshot_block; // is H
// --- VULNERABILITY PART 2 ---
if (current_block_height == current_snapshot_block) { // TRUE (H == H)
// ATTACKER EXPLOIT: This condition is met. The logic inside prevents writing
// the attacker's PRE-BURN balance to their personal snapshot table for block H.
if (option::is_none(&prev_snapshot_block)) {
return; // Early return, no write for H
};
// Tries to write for Previous_H instead, still no write for H
snapshot_block = option::extract(&mut prev_snapshot_block);
};
// The code path that writes `table::add(&mut c_balance.snapshot, key, c_balance.balance)`
// requires `current_block_height > snapshot_block`, which is FALSE here.
// RESULT: Attacker's balance for H is NOT recorded.
}
// 3. In `cabal_token::get_snapshot_balance_internal` (called during reward calculation for block H)
fun get_snapshot_balance_internal(cabal_balance: &CabalBalance, block_height: u64): u64 { // block_height is H
// ... start_block check ...
// Search attacker's personal table for entry >= H
let key = table_key::encode_u64(block_height);
let iter = table::iter(&cabal_balance.snapshot, option::some(key), option::none(), 2);
// --- VULNERABILITY PART 3 ---
// Because the write was skipped (Vuln Part 2), no entry >= H is found for the attacker.
if (!table::prepare<vector<u8>, u64>(iter)) {
// ATTACKER EXPLOIT: Fallback logic returns the attacker's LIVE balance.
// At this point (reward calculation time), the live balance is the POST-BURN balance.
return cabal_balance.balance;
};
// This part is not reached for the attacker in this scenario
let (_, balance) = table::next(iter);
*balance
}
Impact:
- Invariant violation: The attack breaks the core guarantee Σ balances_H = supply_H. Because the attacker’s balance is recorded after the burn while the supply is recorded before, the numerator shrinks but the denominator stays high.
- Universal reward loss: Reward shares now sum to < 1, so the bribe contract distributes fewer tokens than were deposited. Every honest staker at snapshot H loses part of their yield; the missing amount remains stranded in the pool.
- Direct leverage for the attacker: An exiting holder gives up only their own one‑cycle reward while slashing everyone else’s payout by the same absolute amount. They can repeat the manoeuvre each epoch—or threaten to—creating a zero‑cost grief / extortion vector.
- Compromise of a core protocol function: Fair, supply‑proportional bribe distribution is a primary feature of Cabal. Desynchronising balances and supply corrupts that mechanism, undermining trust in the staking programme.
- Irreversible cycle corruption: Once the snapshot for block H is polluted, the mis‑distribution for that cycle is permanent. Users cannot reclaim the lost bribes without an invasive state migration.
Recommended mitigation steps
- Add Supply Update to burn: Modify cabal_token::burn to check if it’s executing in the same block as a snapshot. If so, update the supply_snapshots table for that block height with the new, lower supply after the burn, mirroring the logic in cabal_token::mint_to (a sketch follows this list).
- Fix check_snapshot: Ensure check_snapshot always writes the user’s pre-interaction balance for the current snapshot block H when needed, removing the logic that skips this write during same-block interactions.
[M-04] Unstaking from LP pools will cause underflow and lock user funds
Submitted by givn, also found by 0xAlix2, bareli, and den-sosnowsky
Description
When users unstake their LP tokens they call initiate_unstake for the required amount. This creates an UnbondingEntry and increases the pending unstake amount: unstaked_pending_amounts[unstaking_type] + unbonding_amount.
At some point an admin (or user) will invoke batch_undelegate_pending_lps():
for (i in 0..vector::length(&m_store.unbond_period)) {
// undelegate
pool_router::unlock(m_store.stake_token_metadata[i], m_store.unstaked_pending_amounts[i]);
// clear pending
m_store.unstaked_pending_amounts[i] = 0;
};
The pool_router::unlock function calculates what % of every pool should be undelegated so that the desired LP token amount is reached. This happens by calculating a fraction, iterating over the pools and subtracting an amount equal to that fraction. The issue is that when the last pool element is reached, the remaining amount is all removed from there:
let temp_amount = if (i == vector::length(&pools) - 1) {
remain_amount
} else {
bigdecimal::mul_by_u64_truncate(ratio, temp_pool.amount)
};
remain_amount = remain_amount - temp_amount;
This means that if the last pool is empty or with insufficient funds an underflow will occur here:
temp_pool.amount = temp_pool.amount - temp_amount;
The protocol tries to always fund the pool with the least staked tokens by using get_most_underutilized_pool, but this does not prevent situations of imbalance, like:
- The most underutilized pool receives a very big deposit and dwarfs the rest
- A new pool is being freshly added
- Users withdrawing their funds in large numbers
Thus, the subtraction can still underflow in situations that are likely to happen over time.
Impact
- Staked LP tokens can’t be fully withdrawn from the protocol.
- The amount of funds locked can vary greatly, depending on the stake/unstake & operation patterns.
- Once an undelegate amount has been requested, it can’t be reduced to try to unlock a smaller amount and recover the maximum funds possible. Delegations are locked until someone else deposits.
Root Cause
Trying to withdraw too much from a pool when funds are located in other pools.
Proof of Concept
The following code replicates the undelegate calculations of pool_router::unlock and demonstrates that not all the funds can be withdrawn.
Place this test in pool_router.move. Run it with yarn run test- test_unlock_lp_amounts.
#[test, expected_failure()]
public fun test_unlock_lp_amounts() {
let unlock_amount = 2_000_000u64; // Unlock LP
let pools = vector[ // LP staked in each pool
20_000_000,
20_000_000,
10_000
];
let i = 20;
loop {
debug::print(&string::utf8(b"Begin undelegation round"));
pools = calculate_undelegates(pools, unlock_amount);
i = i - 1;
debug::print(&string::utf8(b""));
if(i == 0) {
break;
}
};
// Pool amounts after last iteration
// [debug] "New pool stake amounts"
// [debug] 4500
// [debug] 4500
// [debug] 0
// Now we continue undelegating smaller amounts, but action will underflow
debug::print(&string::utf8(b" ---- Undelegate smaller amount #1 ---- "));
pools = calculate_undelegates(pools, 1_000);
debug::print(&string::utf8(b" ---- Undelegate smaller amount #2 ---- "));
pools = calculate_undelegates(pools, 1_000);
}
/// Simplified version of pool_router::unlock_lp
#[test_only]
fun calculate_undelegates(pools: vector<u64>, unlock_amount: u64): vector<u64> {
let pools_length = vector::length(&pools);
let total_stakes = vector::fold(pools, 0u64, |acc, elem| acc + elem); // LP staked across all pools
let remain_amount: u64 = unlock_amount;
let ratio = bigdecimal::from_ratio_u64(unlock_amount, total_stakes);
debug::print(&string::utf8(b"Total staked before undelegate"));
debug::print(&total_stakes);
assert!(total_stakes >= unlock_amount, 1000777);
for (i in 0..pools_length) {
let pool_stake = vector::borrow_mut(&mut pools, i);
let undelegate_amount = if (i == pools_length - 1) {
remain_amount
} else {
bigdecimal::mul_by_u64_truncate(ratio, *pool_stake)
};
remain_amount = remain_amount - undelegate_amount;
// Update state tracking
*pool_stake = *pool_stake - undelegate_amount;
};
debug::print(&string::utf8(b"New pool stake amounts"));
let total_staked_after_undelegate = vector::fold(pools, 0u64, |acc, elem| {
debug::print(&elem);
acc + elem
});
debug::print(&string::utf8(b"Total staked after undelegate"));
debug::print(&total_staked_after_undelegate);
pools
}
Recommended mitigation steps
Instead of doing one iteration over the pools and subtracting the remaining amount from the last one, use a loop and modulo arithmetic to iterate multiple times and subtract any possible remaining amounts from the other pools (a simplified sketch is shown below).
Separate undelegate amount calculation from the stargate calls so that multiple MsgUndelegate messages are not sent for the same validator.
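A simplified sketch of that idea in the style of the warden's calculate_undelegates helper: a proportional first pass that never exceeds a pool's balance, followed by a second pass that sweeps the truncation remainder from whichever pools still have stake (the function name and error code are illustrative):
// Sketch: split the unlock amount across pools in two passes instead of
// forcing the whole remainder onto the last pool, so no pool ever underflows.
fun calculate_undelegates_safe(pools: vector<u64>, unlock_amount: u64): vector<u64> {
    let total_stakes = vector::fold(pools, 0u64, |acc, elem| acc + elem);
    assert!(total_stakes >= unlock_amount, 1000777); // pools must jointly cover the request
    let pools_length = vector::length(&pools);
    let remain_amount = unlock_amount;
    let ratio = bigdecimal::from_ratio_u64(unlock_amount, total_stakes);
    // Pass 1: proportional split, capped by what is still needed.
    for (i in 0..pools_length) {
        let pool_stake = vector::borrow_mut(&mut pools, i);
        let share = bigdecimal::mul_by_u64_truncate(ratio, *pool_stake);
        let take = if (share > remain_amount) { remain_amount } else { share };
        *pool_stake = *pool_stake - take;
        remain_amount = remain_amount - take;
    };
    // Pass 2: sweep the truncation remainder from pools that still have stake,
    // taking at most what each pool actually holds.
    for (i in 0..pools_length) {
        let pool_stake = vector::borrow_mut(&mut pools, i);
        let take = if (*pool_stake < remain_amount) { *pool_stake } else { remain_amount };
        *pool_stake = *pool_stake - take;
        remain_amount = remain_amount - take;
    };
    pools
}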
[M-05] Last Holder Can’t Exit, Zero‑Supply Unstake Reverts
Submitted by maxzuvex
Finding description and impact
When a user burns the entire remaining supply of a Cabal LST (sxINIT or Cabal LPT) via initiate_unstake, the follow‑up processing step always aborts with a divide‑by‑zero and the user can never exit.
- User calls initiate_unstake(stake_type, S) – S equals the whole supply.
- unstake_xinit / unstake_lp queues process_*_unstake with cosmos::move_execute( … "process_xinit_unstake" | "process_lp_unstake" … ) for the next transaction.
- After queuing, initiate_unstake burns the LST: cabal_token::burn(S) ⇒ live supply becomes 0.
- Transaction 1 finishes and state now shows supply = 0, pending[i] = S.
- Later, Transaction 2 executes process_*_unstake.
- It calls compound_*_pool_rewards (which does not change the LST supply).
- It reads the current LST supply: sx_supply = fungible_asset::supply(meta) ⇒ 0.
- It calculates ratio = bigdecimal::from_ratio_u128(unstake_amount, sx_supply), which triggers assert!(denominator != 0) → EDIVISION_BY_ZERO abort.
Because the burn happened in a prior committed transaction, every retry of process_*_unstake sees the same supply == 0 state and fails again, so the user’s INIT / LP is permanently locked, creating a DoS for the final staker of that pool.
// Simplified logic from process_xinit_unstake
entry fun process_xinit_unstake(account: &signer, staker_addr: address, unstaking_type: u64, unstake_amount: u64) acquires ModuleStore, CabalStore, LockExempt {
// ... permission checks, reward compounding ...
let m_store = borrow_global_mut<ModuleStore>(@staking_addr);
let x_init_amount = m_store.staked_amounts[unstaking_type];
// --- VULNERABILITY ---
// 'unstake_amount' is the original amount burned (== total supply in this case).
// 'sx_init_amount' reads the supply *after* the burn in initiate_unstake, so it's 0.
let sx_init_amount = option::extract(&mut fungible_asset::supply(m_store.cabal_stake_token_metadata[unstaking_type])); // Returns 0
// This attempts bigdecimal::from_ratio_u128(S, 0) --> Division by Zero!
let ratio = bigdecimal::from_ratio_u128(unstake_amount as u128, sx_init_amount);
// Transaction reverts here.
// ... rest of function is unreachable ...
}
Impact:
If an address burns the last sxINIT / LPT in circulation, every call to process_*_unstake reverts with EDIVISION_BY_ZERO, so no UnbondingEntry is recorded and the underlying INIT / LP can never be claimed. The final staker’s funds are permanently locked, causing a pool‑level denial of service.
Recommended mitigation steps
In process_xinit_unstake and process_lp_unstake:
let pool_before = m_store.staked_amounts[pool];
let supply = fungible_asset::supply(meta);
let unbond = if (supply == 0) {
// last holder – give them the entire pool
pool_before
} else {
let r = bigdecimal::from_ratio_u128(unstake_amount, supply);
bigdecimal::mul_by_u64_truncate(r, pool_before)
};
- Guard against supply == 0.
- If it’s the final unstake, transfer the whole remaining pool; otherwise keep the original ratio logic.
Proof of Concept
// Assume pool index 1 is an LP‑staking pool
let pool_idx: u64 = 1;
// ── step 1: mint exactly 1 Cabal‑LPT to Alice ───────────────────────────
let mint_cap = &ModuleStore.cabal_stake_token_caps[pool_idx].mint_cap;
cabal_token::mint_to(mint_cap, @alice, 1); // total supply = 1
// ── step 2: Alice initiates unstake of the ENTIRE supply ────────────────
cabal::initiate_unstake(&signer(@alice), pool_idx, 1);
/*
* inside initiate_unstake:
* • cabal_token::burn(1) → total supply becomes 0
* • schedules process_lp_unstake() (async)
*/
// ── step 3: worker executes queued call ──────────────────────────────────
cabal::process_lp_unstake(&assets_signer, @alice, pool_idx, 1);
/*
* inside process_lp_unstake:
*
* let sx_supply = fungible_asset::supply(lp_metadata); // == 0
* let ratio = bigdecimal::from_ratio_u128(1, sx_supply);
* └────── divide‑by‑zero → abort
*
* transaction reverts with EZeroDenominator
*/
[M-06] LP Redelegation Uses Inaccurate Internal Tracker Amount, Leading to Potential Failures or Orphaned Funds
Submitted by edoscoba
Summary
The redelegate_lp function, called during validator changes for LP pools, uses the internal pool.amount tracker to specify the amount for MsgBeginRedelegate. This tracker can diverge from the actual staked amount due to unreflected rewards or slashing, potentially causing redelegation failures or leaving funds staked with the old validator.
Finding Description
The pool_router::change_validator function allows the deployer (@staking_addr) to migrate staked assets managed by a specific StakePool object from one validator to another. For LP token pools, it calls the internal helper function redelegate_lp located in pool_router.move#L327-L339.
The redelegate_lp function constructs a MsgBeginRedelegate message to be sent via cosmos::stargate. The amount of tokens to be redelegated in this message is taken directly from the pool.amount field of the StakePool resource:
fun redelegate_lp(pool: &StakePool, new_validator_address: String) {
let denom = coin::metadata_to_denom(pool.metadata);
let coin = Coin { denom, amount: pool.amount }; // <<< Uses pool.amount
let msg = MsgBeginRedelegate {
// ... other fields ...
amount: vector[coin] // <<< Amount specified in the message
};
cosmos::stargate(&object::generate_signer_for_extending(&pool.ref), marshal(&msg));
}
However, the pool.amount is merely an internal counter updated by pool_router::add_stake, pool_router::unstake, and pool_router::unlock_lp. It does not automatically reflect changes in the actual staked balance within the underlying mstaking module due to:
- Accrued Rewards: Rewards earned by the staked LP tokens increase the actual delegation shares/amount but are not reflected in pool.amount until compound_lp_pool_rewards runs (triggered by user actions) and subsequently calls add_stake.
- Slashing: If the validator is slashed, the actual delegation shares/amount decreases, but pool.amount is never updated to reflect this loss.
Therefore, pool.amount can easily drift from the true staked amount. Sending a MsgBeginRedelegate with this potentially inaccurate amount breaks the expectation that the administrative function correctly manages the entirety of the funds associated with the StakePool object.
Impact
Using an inaccurate amount in MsgBeginRedelegate leads to two primary negative outcomes:
- Redelegation Failure: If pool.amount is greater than the actual staked amount (e.g., due to slashing), the underlying mstaking module will reject the request, causing the cosmos::stargate call and the entire change_validator transaction to abort. This prevents the deployer from migrating funds away from a potentially slashed or undesirable validator.
- Partial Redelegation / Orphaned Funds: If pool.amount is less than the actual staked amount (e.g., due to accrued rewards not yet reflected), the mstaking module will likely succeed in redelegating only the specified pool.amount. The remaining tokens (the difference) will be left staked with the original validator. However, the change_validator function proceeds to update pool.validator to the new address. This creates an inconsistent state where the StakePool object points to the new validator, but some funds remain with the old one, potentially becoming difficult to track, manage, or withdraw through the router’s standard logic.
Likelihood
The likelihood of pool.amount becoming inaccurate is High. Staking rewards are expected to accrue over time. If users don’t frequently stake or unstake from a specific LP pool, the compound_lp_pool_rewards function won’t run often, causing pool.amount to lag behind the actual staked amount (actual > tracker). Slashing events, while less frequent, would cause the tracker to exceed the actual amount.
Therefore, drift between pool.amount and the real staked value is highly likely. The likelihood of this drift causing a problem during a change_validator call is Medium, as it depends on when the deployer chooses to execute this administrative action relative to the drift.
Recommended mitigation steps
Modify the redelegate_lp function to query the actual delegation amount from the underlying mstaking module before constructing the MsgBeginRedelegate message. This can be done using a query_stargate call similar to the one used in get_lp_real_stakes. Use this queried, accurate amount instead of pool.amount.
Apply the following conceptual change (exact query path and response parsing might need adjustment based on Initia’s mstaking module specifics) to pool_router.move#L327-L339:
fun redelegate_lp(pool: &StakePool, new_validator_address: String) {
let denom = coin::metadata_to_denom(pool.metadata);
- let coin = Coin { denom, amount: pool.amount };
+ let pool_addr = object::address_from_extend_ref(&pool.ref);
+ // Query the actual staked amount instead of relying on the internal tracker
+ let path = b"/initia.mstaking.v1.Query/Delegation"; // Adjust path if needed
+ let request = DelegationRequest { validator_addr: pool.validator, delegator_addr: address::to_sdk(pool_addr) };
+ let response_bytes = query_stargate(path, marshal(&request));
+ // Note: Need robust parsing and error handling for the query response here.
+ // Assuming successful query and parsing to get the actual_staked_amount:
+ let actual_staked_amount = parse_delegation_response_amount(response_bytes, denom); // Placeholder for parsing logic
+ assert!(actual_staked_amount > 0, error::invalid_state(0)); // Add appropriate error code
+
+ let coin = Coin { denom, amount: actual_staked_amount }; // Use the queried amount
let msg = MsgBeginRedelegate {
_type_: string::utf8(b"/initia.mstaking.v1.MsgBeginRedelegate"),
delegator_address: to_sdk(object::address_from_extend_ref(&pool.ref)),
// ... remaining fields and the cosmos::stargate call unchanged ...
(Note: The parse_delegation_response_amount function is illustrative; the actual implementation would involve using unmarshal and navigating the DelegationResponse struct as done in get_lp_real_stakes to extract the correct amount for the given denom.)
Proof of Concept
- Setup: Configure an LP pool using add_pool. Stake some LP tokens via cabal::stake_asset (which calls pool_router::add_stake), setting pool.amount to, say, 1,000,000.
- Scenario 1 (Rewards Accrued): Assume rewards accrue in the underlying mstaking module, increasing the actual staked amount to 1,050,000, but no user actions trigger compounding, so pool.amount remains 1,000,000.
- Action: The deployer calls change_validator for this pool. redelegate_lp is called.
- Execution: redelegate_lp constructs MsgBeginRedelegate with amount = 1,000,000.
- Outcome: The mstaking module successfully redelegates 1,000,000 tokens. 50,000 tokens remain staked with the old validator. change_validator updates pool.validator to the new address. The 50,000 tokens are now potentially orphaned from the router’s perspective.
- Scenario 2 (Slashing Occurred): Assume the validator was slashed, reducing the actual staked amount to 950,000, but pool.amount remains 1,000,000.
- Action: The deployer calls change_validator. redelegate_lp is called.
- Execution: redelegate_lp constructs MsgBeginRedelegate with amount = 1,000,000.
- Outcome: The mstaking module rejects the request because only 950,000 tokens are available. The cosmos::stargate call fails, causing the change_validator transaction to abort. The validator cannot be changed.
[M-07] Desynchronization of Cabal’s internal accounting with actual staked INIT amounts leads to over-minting of sxINIT tokens
Submitted by ChainSentry, also found by Afriauditor, givn, and maze
Summary
The Cabal Protocol’s implementation of compound_xinit_pool_rewards fails to synchronize the protocol’s internal accounting (m_store.staked_amounts) with the actual amount of INIT tokens staked in the underlying Initia staking system. This creates a vulnerability where external events like slashing penalties or validator-initiated actions that reduce the staked amount are not reflected in Cabal’s internal state. The reward compounding function simply adds claimed rewards to its internal tracking variable without verifying that this matches reality, creating a divergence between what Cabal thinks is staked and what actually is staked. When slashing occurs, users who stake xINIT will receive more sxINIT than they should based on the actual backing ratio. This leads to economic dilution of all sxINIT holders.
This issue is particularly concerning because it compounds over time - each slashing event that goes unaccounted for widens the gap between reported and actual values, eventually leading to significant economic damage for the protocol and its users.
Technical Explanation
The core issue lies in the compound_xinit_pool_rewards function in cabal.move, which is responsible for claiming staking rewards and updating the protocol’s internal state:
fun compound_xinit_pool_rewards(m_store: &mut ModuleStore, pool_index: u64) {
let coin_metadata = coin::metadata(@initia_std, string::utf8(b"uinit"));
let reward_fa = pool_router::withdraw_rewards(coin_metadata);
let reward_amount = fungible_asset::amount(&reward_fa);
if (reward_amount > 0) {
// calculate fee amount
let fee_ratio = bigdecimal::from_ratio_u64(m_store.xinit_stake_reward_fee_bps, BPS_BASE);
let fee_amount = bigdecimal::mul_by_u64_truncate(fee_ratio, reward_amount);
let fee_fa = fungible_asset::extract(&mut reward_fa, fee_amount);
let rewards_remaining = reward_amount - fee_amount;
primary_fungible_store::deposit(package::get_commission_fee_store_address(), fee_fa);
m_store.stake_reward_amounts[pool_index] = m_store.stake_reward_amounts[pool_index] + rewards_remaining;
pool_router::add_stake(reward_fa);
// mint xINIT to pool
m_store.staked_amounts[pool_index] = m_store.staked_amounts[pool_index] + rewards_remaining;
coin::mint_to(&m_store.x_init_caps.mint_cap, package::get_assets_store_address(), rewards_remaining);
} else {
fungible_asset::destroy_zero(reward_fa);
}
}
The issue occurs because this function:
- Claims rewards from the staking system via pool_router::withdraw_rewards
- Processes these rewards and restakes them via pool_router::add_stake
- Updates m_store.staked_amounts[pool_index] by simply adding the rewards amount
- Never verifies that this updated value matches the actual staked amount in the underlying system
However, the protocol has a function, pool_router::get_real_total_stakes, that does query the actual staked amount from the Initia staking system:
// From pool_router.move
public fun get_real_total_stakes(metadata: Object<Metadata>): u64 {
// Sum up all stake amounts from the underlying staking system
let total_stakes: u64 = 0;
/* ... */
let pools = *simple_map::borrow(&router.token_pool_map, &metadata);
for (i in 0..vector::length(&pools)) {
let amount = if (metadata == utils::get_init_metadata()) {
get_init_real_stakes(&pools[i])
} else {
get_lp_real_stakes(&pools[i])
};
total_stakes = total_stakes + amount;
};
total_stakes
}
This function is never called during reward compounding, leading to the desynchronization.
The following scenario demonstrates how this vulnerability can lead to over-minting of sxINIT tokens:
- Initial state:
  - 1,000,000,000 INIT staked in the Initia staking system
  - m_store.staked_amounts[0] = 1,000,000,000
  - Total sxINIT supply = 1,000,000,000
- A slashing event occurs in the Initia staking system, reducing the staked INIT by 5%:
  - Actual staked INIT = 950,000,000
  - m_store.staked_amounts[0] still = 1,000,000,000 (unchanged)
- Rewards of 50,000,000 INIT are claimed via compound_xinit_pool_rewards:
  - The function adds 50,000,000 to m_store.staked_amounts[0], making it 1,050,000,000
  - Actual staked INIT after adding rewards = 1,000,000,000 (950,000,000 + 50,000,000)
- A user comes to stake 100,000,000 xINIT:
  - According to Cabal’s accounting: Exchange rate = 1,050,000,000 INIT / 1,000,000,000 sxINIT = 1.05
  - User should receive: 100,000,000 / 1.05 = 95,238,095 sxINIT
  - But the actual exchange rate should be: 1,000,000,000 INIT / 1,000,000,000 sxINIT = 1.0
  - User should actually receive: 100,000,000 / 1.0 = 100,000,000 sxINIT
- The discrepancy:
  - User receives 95,238,095 sxINIT
  - These tokens are backed by only 90,702,948 INIT (95,238,095 * 1,000,000,000 / 1,050,000,000)
  - This means the user has been short-changed by 4,761,905 INIT worth of backing
The issue becomes even more severe with multiple slashing events and/or larger stake amounts.
Impact
The impact of this vulnerability is significant and affects multiple areas:
- Violation of Core Protocol Invariants: The fundamental invariant 1 xINIT ≈ 1 INIT is broken. This undermines the entire economic model of the protocol as described in the documentation.
- Economic Dilution: When new users stake xINIT and receive sxINIT based on incorrect exchange rates, they get fewer tokens than they should. This effectively transfers value from new users to existing sxINIT holders.
- Systemic Risk: Each uncorrected slashing event compounds the problem. Over time, the divergence between tracked and actual amounts could become severe, potentially leading to:
  - Loss of user confidence in the protocol
  - Inability to properly value sxINIT tokens
  - Difficulty in integrating with other DeFi protocols due to unreliable pricing
- Unbonding Issues: When users try to unstake their sxINIT tokens, they might not receive the expected amount of xINIT back, leading to unexpected losses.
This issue affects all users of the Cabal Protocol, with the severity increasing over time as more slashing events occur without correction.
Recommended mitigation steps
Sync with Reality: Modify the compound_xinit_pool_rewards function to query the actual staked amounts after claiming rewards.
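A conceptual sketch of that synchronization at the tail of compound_xinit_pool_rewards, reusing pool_router::get_real_total_stakes as shown above; whether to overwrite the per-pool entry directly or reconcile it depends on how many pools back the uinit metadata:
// After restaking the rewards, resynchronize the tracker with on-chain state
// instead of blindly adding the reward amount.
pool_router::add_stake(reward_fa);
let real_staked = pool_router::get_real_total_stakes(coin_metadata);
// Any slashing that occurred since the last compound is now reflected immediately.
m_store.staked_amounts[pool_index] = real_staked;
coin::mint_to(&m_store.x_init_caps.mint_cap, package::get_assets_store_address(), rewards_remaining);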
Disclosures
C4 is an open organization governed by participants in the community.
C4 audits incentivize the discovery of exploits, vulnerabilities, and bugs in smart contracts. Security researchers are rewarded at an increasing rate for finding higher-risk issues. Audit submissions are judged by a knowledgeable security researcher and disclosed to sponsoring developers. C4 does not conduct formal verification regarding the provided code but instead provides final verification.
C4 does not provide any guarantee or warranty regarding the security of this project. All smart contract software should be used at the sole risk and responsibility of users.