LayerZero

LayerZero: Starknet Endpoint
Findings & Analysis Report

2026-01-29

Table of contents

Overview

About C4

Code4rena (C4) is a competitive audit platform where security researchers, referred to as Wardens, review, audit, and analyze codebases for security vulnerabilities in exchange for bounties provided by sponsoring projects.

During the audit outlined in this document, C4 conducted an analysis of the LayerZero: Starknet Endpoint smart contract system. The audit took place from October 24 to November 07, 2025.

Final report assembled by Code4rena.

Summary

The C4 analysis yielded an aggregated total of 0 HIGH or MEDIUM severity vulnerabilities.

Additionally, C4's analysis included 53 reports detailing issues with a risk rating of LOW severity or NON-CRITICAL.

The issues presented here are linked back to their original finding, which may include relevant context from the judge and LayerZero team.

Scope

The code under review can be found within the C4 LayerZero: Starknet Endpoint repository, and is composed of 46 files written in the Cairo programming language.

The code in C4’s LayerZero: Starknet Endpoint repository was pulled from:

Severity Criteria

C4 assesses the severity of disclosed vulnerabilities based on three primary risk categories: high, medium, and low/non-critical.

High-level considerations for vulnerabilities span the following key areas when conducting assessments:

  • Malicious Input Handling
  • Escalation of privileges
  • Arithmetic
  • Gas use

For more information regarding the severity criteria referenced throughout the submission review process, please refer to the documentation provided on the C4 website, specifically our section on Severity Categorization.

Low Risk and Non-Critical Issues

For this audit, 53 reports were submitted by wardens detailing low risk and non-critical issues. The report highlighted below by jerry0422 received the top score from the judge.

The following wardens also submitted reports: 0x1982us, 0xbrett8571, 0xFBI, 0xIconart, 0xki, 0xshdax, 0xSmartContract, Afriauditor, Almanax, amirhossineedalat, Astroboy, Bale, Brene, chupinexx, cosin3, dee24, dmdg321, dray, Ephraim, eta, francoHacker, galer_ah, glorbo, heavyw8t, inh3l, jaykosai, johnyfwesh, K42, kestyvickky, KineticsOfWeb3, LeoGold, LeopoldFlint, lioblaze, Meks079, mohamedfahmy, NexusAudits, niffylord, oakcobalt, Petrus, rare_one, ryzen_xp, Sathish9098, Sparrow, Tigerfrake, TOSHI, v2110, valarislife, willycode20, winnerz, Wojack, Zenmagnum, and zubyoz.

[L-01] Worker Fee Multiplier Lacks Upper Bound Validation Allowing Excessive Fee Configuration

  • DVN set_dst_config (no validation): dvn.cairo #L158-L165
  • Executor set_dst_config (no validation): executor.cairo #L207-L214
  • DstConfig struct definitions: structs.cairo #L26-L32 and structs.cairo #L7-L18

The set_dst_config function in both DVN and Executor contracts accepts a multiplier_bps parameter without validating its upper bound. The multiplier_bps field is defined as u16, allowing values up to 65,535 (655.35%), while the protocol’s basis point denominator (BPS_DENOMINATOR) is 10,000 (100%).
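As a rough illustration (a Python sketch of the basis-point arithmetic, not the Cairo code), the unchecked `u16` range lets an admin configure a multiplier far above the 10,000-bps denominator:

```python
BPS_DENOMINATOR = 10_000  # 100%

def apply_multiplier(base_fee: int, multiplier_bps: int) -> int:
    """Scale a worker's base fee by a basis-point multiplier."""
    return base_fee * multiplier_bps // BPS_DENOMINATOR

# A u16 multiplier can be as large as 65,535 (655.35%).
print(apply_multiplier(1_000, 10_000))  # 1000 -> 100%, the intended ceiling
print(apply_multiplier(1_000, 65_535))  # 6553 -> 655.35%, accepted without validation
```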

In dvn.cairo:

fn set_dst_config(ref self: ContractState, params: Array<SetDstConfigParams>) {
    self.worker_base._assert_only_admin();
    
    let dst_config_set = params.span();
    for param in params.into_iter() {
        self.dst_configs.write(param.dst_eid, param.config);  // No validation
    }
    
    self.emit(DstConfigSet { dst_config_set });
}

In executor.cairo:

fn set_dst_config(ref self: ContractState, params: Array<SetDstConfigParams>) {
    self.worker_base._assert_only_admin();
    
    let dst_config_set = params.span();
    for param in params.into_iter() {
        self.dst_configs.write(param.dst_eid, param.config);  // No validation
    }
    
    self.emit(DstConfigSet { dst_config_set });
}

This contrasts with the OApp fee component, which properly validates fee basis points:

// In fee.cairo
fn _assert_valid_fee_bps(fee_bps: u128) {
    assert_with_byte_array(
        fee_bps <= BPS_DENOMINATOR.try_into().unwrap(),
        err_invalid_bps(fee_bps)
    );
}

Impact

  • Creates inconsistency between OApp fee validation (capped at 100%) and worker fee validation (uncapped)
  • While users receive fee quotes before sending messages and can reject excessive fees, lack of validation enables misconfiguration

Recommended Mitigation Steps

Add validation to ensure multiplier_bps does not exceed reasonable bounds. Consider implementing a maximum multiplier cap (e.g., 200% = 20,000 bps for premium services):

For DVN (dvn.cairo):

fn set_dst_config(ref self: ContractState, params: Array<SetDstConfigParams>) {
    self.worker_base._assert_only_admin();
    
    let dst_config_set = params.span();
    for param in params.into_iter() {
        // Validate multiplier_bps
        let max_multiplier_bps: u16 = 20000; // 200% maximum
        assert(
            param.config.multiplier_bps <= max_multiplier_bps,
            'Invalid multiplier_bps'
        );
        
        self.dst_configs.write(param.dst_eid, param.config);
    }
    
    self.emit(DstConfigSet { dst_config_set });
}

Apply the same validation to executor.cairo. Alternatively, enforce strict 100% cap for consistency with OApp fees if no premium is intended:

assert(
    param.config.multiplier_bps <= BPS_DENOMINATOR.try_into().unwrap(),
    'Multiplier exceeds 100%'
);

[L-02] Over-strict fee validation and allowance-sweeping refunds cause denial of service

The EndpointV2::send() function contains two critical flaws in its fee validation and refund mechanism that cause denial of service for users with legitimate token balances.

Issue 1: Over-strict Balance Validation

In _assert_messaging_fee(), the validation requires that the sender’s balance must be greater than or equal to their entire allowance, not just the required fee:

let has_required_native_fee = supplied_native_fee_allowance >= required_native_fee
    && supplied_native_balance >= supplied_native_fee_allowance;  // Bug: checks balance >= allowance

This is overly restrictive because:

  • Users only need balance >= required_fee to successfully pay the fee
  • The check incorrectly requires balance >= allowance
  • Users commonly give large or unlimited allowances (standard ERC20 practice)

Example:

  • User approves 1000 tokens to Endpoint
  • User has 200 tokens in balance
  • Required fee is 100 tokens
  • Transaction reverts at validation: 200 >= 1000 fails ❌
  • Should succeed since: 200 >= 100 ✓
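The two checks can be contrasted in a small Python model (hypothetical helper names, not the Cairo API):

```python
def buggy_check(balance: int, allowance: int, required_fee: int) -> bool:
    # Current logic: balance must cover the *entire allowance*
    return allowance >= required_fee and balance >= allowance

def correct_check(balance: int, allowance: int, required_fee: int) -> bool:
    # Users only ever need enough balance for the fee itself
    return allowance >= required_fee and balance >= required_fee

# The example above: 200 balance, 1000 allowance, 100 fee
print(buggy_check(200, 1000, 100))    # False -- transaction reverts
print(correct_check(200, 1000, 100))  # True  -- should succeed
```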

Issue 2: Allowance-Sweeping Refund Causes DoS

In _refund_native() and _refund_zro(), the contract attempts to refund the entire excess allowance rather than just the excess amount actually transferred:

fn _refund_native(/* ... */) {
    if allowance > fee {
        let success = native_token.transfer_from(sender, refund_address, allowance - fee);  // Bug: tries to transfer more than balance
        assert_with_byte_array(success, err_native_transfer_failed());
    }
}


Even if Issue 1 were fixed, the refund would still fail because:

  • If user has balance < allowance (very common scenario)
  • Refund tries to transfer allowance - fee tokens
  • Transfer fails because sender doesn’t have enough balance
  • Entire send() transaction reverts

Example:

  • User approves 1000 tokens
  • User has 200 tokens
  • Fee is 100 tokens (paid successfully)
  • Refund attempts: transfer_from(sender, refund_address, 900)
  • User only has 100 tokens remaining
  • Transfer fails, entire transaction reverts ❌
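The refund failure can be modeled the same way (a Python sketch; the exception stands in for the failed ERC20 `transfer_from`):

```python
def refund_sweep(balance_after_fee: int, allowance: int, fee: int) -> int:
    """Current logic: try to refund the entire unused allowance."""
    refund = allowance - fee              # 1000 - 100 = 900
    if refund > balance_after_fee:        # sender holds only 100
        raise RuntimeError("transfer_from fails: insufficient balance")
    return refund

try:
    refund_sweep(balance_after_fee=100, allowance=1000, fee=100)
except RuntimeError as e:
    print(e)  # the whole send() reverts here

# Capping the refund at the sender's remaining balance avoids the revert:
def refund_capped(balance_after_fee: int, allowance: int, fee: int) -> int:
    return min(allowance - fee, balance_after_fee)

print(refund_capped(100, 1000, 100))  # 100 -- refund only what is actually held
```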

Root Cause

The implementation incorrectly assumes that users will only approve amounts equal to or less than their balance. This contradicts standard ERC20 usage patterns where:

  1. Users give large/unlimited approvals for convenience
  2. Users may spend tokens after giving approval
  3. Only the actual transfer amount matters, not the approval amount

Impact

Denial of Service on Core Protocol Functionality:

  • Users with sufficient balance to pay fees cannot send cross-chain messages
  • Affects any user who has given a standard large allowance
  • No workaround except manually reducing allowance to match exact balance before each transaction
  • Breaks the protocol’s primary function (cross-chain messaging)

This is a common and realistic scenario, not an edge case.

Recommended Mitigation Steps

Fix 1: Validate only required fee against balance

fn _assert_messaging_fee(
    required_native_fee: u256,
    supplied_native_fee_allowance: u256,
    supplied_native_balance: u256,
    required_zro_token_fee: u256,
    supplied_zro_fee_allowance: u256,
    supplied_zro_balance: u256,
) {
-   let has_required_native_fee = supplied_native_fee_allowance >= required_native_fee
-       && supplied_native_balance >= supplied_native_fee_allowance;
+   let has_required_native_fee = supplied_native_fee_allowance >= required_native_fee
+       && supplied_native_balance >= required_native_fee;
    
-   let has_required_zro_token_fee = supplied_zro_fee_allowance >= required_zro_token_fee
-       && supplied_zro_balance >= supplied_zro_fee_allowance;
+   let has_required_zro_token_fee = supplied_zro_fee_allowance >= required_zro_token_fee
+       && supplied_zro_balance >= required_zro_token_fee;

    assert_with_byte_array(/* ... */);
}

Fix 2: Only refund actual excess transferred

fn _refund_native(
    self: @ContractState,
    native_token: IERC20Dispatcher,
    allowance: u256,
    fee: u256,
    sender: ContractAddress,
    refund_address: ContractAddress,
) {
    if allowance > fee {
-       let success = native_token.transfer_from(sender, refund_address, allowance - fee);
+       let balance = native_token.balance_of(sender);
+       let refund_amount = core::cmp::min(allowance - fee, balance);
+       if refund_amount > 0 {
+           let success = native_token.transfer_from(sender, refund_address, refund_amount);
+           assert_with_byte_array(success, err_native_transfer_failed());
+       }
-       assert_with_byte_array(success, err_native_transfer_failed());
    }
}

Apply the same fix to _refund_zro().

Alternatively, restructure to pull only the exact fee amount needed via transfer_from(), eliminating the need for refunds entirely.

[L-03] Nilified Messages Can Be Re-Committed, Allowing Execution of Previously Invalidated Messages

The commit() function in endpoint_v2.cairo fails to prevent re-commitment of nilified messages, allowing previously invalidated messages to be restored to an executable state. This completely defeats the purpose of the nilify() function, which is designed to permanently prevent message execution.

The vulnerability stems from how _committable() determines whether a message can be committed:

fn _committable(
    self: @ContractState,
    origin: Origin,
    receiver: ContractAddress,
    lazy_inbound_nonce: u64,
) -> bool {
    origin.nonce > lazy_inbound_nonce
        || self
            .messaging_channel
            ._has_payload_hash(receiver, origin.src_eid, origin.sender, origin.nonce)
}

The function returns true if _has_payload_hash() returns true, which checks:

fn _has_payload_hash(...) -> bool {
    self._inbound_payload_hash(receiver, src_eid, sender, nonce) != EMPTY_PAYLOAD_HASH
}

When a message is nilified, its payload hash is set to NIL_PAYLOAD_HASH (max u256). Since NIL_PAYLOAD_HASH != EMPTY_PAYLOAD_HASH, the _has_payload_hash() check returns true, making the message committable again.
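A Python sketch of the sentinel values (constants modeled on the Cairo source) shows why a nilified slot still looks committable:

```python
EMPTY_PAYLOAD_HASH = 0
NIL_PAYLOAD_HASH = 2**256 - 1  # max u256, written by nilify()

def has_payload_hash(stored_hash: int) -> bool:
    # Mirrors _has_payload_hash(): anything non-empty counts
    return stored_hash != EMPTY_PAYLOAD_HASH

# A nilified message passes the committability check...
print(has_payload_hash(NIL_PAYLOAD_HASH))  # True -- re-commit is allowed

# ...whereas excluding the NIL sentinel closes the gap:
def has_live_payload_hash(stored_hash: int) -> bool:
    return stored_hash not in (EMPTY_PAYLOAD_HASH, NIL_PAYLOAD_HASH)

print(has_live_payload_hash(NIL_PAYLOAD_HASH))  # False -- nilified stays dead
```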

The commit() function then overwrites the NIL_PAYLOAD_HASH without any validation:

// Ensure that the path is verifiable
assert_with_byte_array(
    self._committable(origin.clone(), receiver, lazy_nonce), err_path_not_committable(),
);

// Ensure that the payload hash is valid
assert_with_byte_array(payload_hash != EMPTY_PAYLOAD_HASH, err_invalid_payload_hash());

// Store the payload hash for this inbound message
self
    .messaging_channel
    ._inbound_payload_hash_entry(receiver, origin.src_eid, origin.sender, origin.nonce)
    .write(payload_hash);  // ← Overwrites NIL_PAYLOAD_HASH

Attack scenario:

  1. A malicious message (nonce 1) is committed with payload_hash_A
  2. The receiver discovers the malicious intent and calls nilify() to prevent execution
  3. The message library re-commits the same message with the original payload_hash_A (or even a different payload hash)
  4. The nilified message is now executable again, bypassing the security measure

Impact: This vulnerability allows bypassing the nilify() security mechanism, enabling execution of messages that were intentionally invalidated by the receiver for security reasons.

Recommended Mitigation Steps

Add a check in the commit() function to prevent overwriting nilified messages:

fn commit(
    ref self: ContractState,
    origin: Origin,
    receiver: ContractAddress,
    payload_hash: Bytes32,
) {
    self.reentrancy_guard.start();

    // Assert that the caller is the receive library for this path
    self._assert_only_receive_library(receiver, origin.src_eid);

    // Get the lazy inbound nonce for this path
    let lazy_nonce = self
        .messaging_channel
        .lazy_inbound_nonce(receiver, origin.src_eid, origin.sender);

    // Ensure that the path is initializable
    assert_with_byte_array(
        self._initializable(origin.clone(), receiver, lazy_nonce),
        err_path_not_initializable(),
    );

    // Ensure that the path is verifiable
    assert_with_byte_array(
        self._committable(origin.clone(), receiver, lazy_nonce), err_path_not_committable(),
    );

    // Ensure that the payload hash is valid
    assert_with_byte_array(payload_hash != EMPTY_PAYLOAD_HASH, err_invalid_payload_hash());

+   // Ensure that the message has not been nilified
+   let current_payload_hash = self
+       .messaging_channel
+       .inbound_payload_hash(receiver, origin.src_eid, origin.sender, origin.nonce);
+   assert_with_byte_array(
+       current_payload_hash != NIL_PAYLOAD_HASH,
+       err_message_nilified(),
+   );

    // Store the payload hash for this inbound message
    self
        .messaging_channel
        ._inbound_payload_hash_entry(receiver, origin.src_eid, origin.sender, origin.nonce)
        .write(payload_hash);

    self.emit(PacketCommitted { origin, receiver, payload_hash });

    self.reentrancy_guard.end();
}

Alternatively, modify _committable() to exclude nilified messages:

fn _committable(
    self: @ContractState,
    origin: Origin,
    receiver: ContractAddress,
    lazy_inbound_nonce: u64,
) -> bool {
    if origin.nonce > lazy_inbound_nonce {
        return true;
    }
    
    let payload_hash = self
        .messaging_channel
        ._inbound_payload_hash(receiver, origin.src_eid, origin.sender, origin.nonce);
    
    payload_hash != EMPTY_PAYLOAD_HASH && payload_hash != NIL_PAYLOAD_HASH
}

[L-04] Treasury cap ineffective - expands proportionally with worker fees, allowing unbounded treasury charges

The _apply_treasury_fee_cap function is designed to cap the treasury fee charged to users. However, the implementation allows the cap to expand proportionally with worker fees, defeating the purpose of having a fixed maximum.

Root Cause

In ultra_light_node_302.cairo, the cap calculation uses:

fn _apply_treasury_fee_cap(
    self: @ContractState, native_fee: u256, treasury_fee: u256, pay_in_lz_token: bool,
) -> u256 {
    if pay_in_lz_token {
        return treasury_fee;
    }

    // maxNativeFee = max (_totalNativeFee, treasuryNativeFeeCap)
    let treasury_native_fee_cap = self.treasury_native_fee_cap.read();

    min(treasury_fee, max(native_fee, treasury_native_fee_cap))
}

Where:

  • native_fee = total worker fees (DVN + Executor fees) from L247-248
  • treasury_fee = native_fee * basis_points / 10000 calculated by the treasury contract L88
  • treasury_native_fee_cap = configured cap value

The formula min(treasury_fee, max(native_fee, treasury_native_fee_cap)) creates a dynamic ceiling:

  • When native_fee > treasury_native_fee_cap: effective cap = native_fee
  • When native_fee ≤ treasury_native_fee_cap: effective cap = treasury_native_fee_cap

Impact

The treasury can charge fees far exceeding the intended cap when worker fees are high.

Example:

  • Configured treasury_native_fee_cap = 100 ETH
  • Worker fees (native_fee) = 10,000 ETH
  • Treasury basis_points = 1000 (10%)
  • Treasury calculates: treasury_fee = 10,000 × 0.10 = 1,000 ETH
  • Cap applied: min(1,000, max(10,000, 100)) = min(1,000, 10,000) = 1,000 ETH

Result: Treasury charges 1,000 ETH despite the cap being set to 100 ETH.
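The example numbers can be checked directly (a Python sketch of the cap formula, contrasting it with a true fixed ceiling):

```python
def current_cap(native_fee: int, treasury_fee: int, cap: int) -> int:
    # min(treasury_fee, max(native_fee, cap)) -- the cap floats up with worker fees
    return min(treasury_fee, max(native_fee, cap))

def fixed_cap(native_fee: int, treasury_fee: int, cap: int) -> int:
    # a fixed ceiling, as proposed in the mitigation
    return min(treasury_fee, cap)

# Worker fees 10,000 ETH, treasury quote 1,000 ETH, configured cap 100 ETH
print(current_cap(10_000, 1_000, 100))  # 1000 -- cap exceeded tenfold
print(fixed_cap(10_000, 1_000, 100))    # 100  -- cap respected
```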

This allows the treasury to extract excessive fees proportional to worker costs, creating an unfair burden on users during high-fee periods and potentially enabling economic attacks by inflating worker fees.

Recommended Mitigation Steps

Replace the dynamic cap formula with a true fixed maximum:

fn _apply_treasury_fee_cap(
    self: @ContractState, native_fee: u256, treasury_fee: u256, pay_in_lz_token: bool,
) -> u256 {
    if pay_in_lz_token {
        return treasury_fee;
    }

-   // we must prevent high-treasuryFee Dos attack
-   // nativeFee = min(treasureFeeQuote, maxNativeFee)
-   // opportunistically raise the maxNativeFee to be the same as _totalNativeFee
-   // can't use the _totalNativeFee alone because the oapp can use custom workers to force
-   // the fee to 0.
-   // maxNativeFee = max (_totalNativeFee, treasuryNativeFeeCap)
    let treasury_native_fee_cap = self.treasury_native_fee_cap.read();

-   min(treasury_fee, max(native_fee, treasury_native_fee_cap))
+   min(treasury_fee, treasury_native_fee_cap)
}

This ensures the treasury fee never exceeds treasury_native_fee_cap regardless of worker fees, providing true protection against excessive treasury charges.

[L-05] Receive-library grace period breaks on default ↔ custom switches, risking in-flight message drops

The is_valid_receive_library function incorrectly validates libraries during grace periods when an OApp switches between default and custom receive libraries. The function determines which timeout storage to check based on the current library configuration, but this logic fails when the library type changes during a transition.

Root Cause:

In is_valid_receive_library (lines 346-372), the timeout lookup logic uses the current library type to decide which timeout storage to check:

let timeout = if is_default {
    // OApp is using default library, check default timeout configuration
    self.default_receive_library_timeout.entry(src_eid).read()
} else {
    // OApp has custom library, check OApp-specific timeout configuration
    self.receive_library_timeout.entry(oapp).entry(src_eid).read()
};

This creates two critical failure scenarios:

Scenario 1: Default → Custom Library Switch

  1. OApp uses the default library (e.g., LibraryA)
  2. Protocol upgrades the default library to LibraryB with a grace period

    • Sets default_receive_library_timeout[eid] = Timeout { lib: LibraryA, expiry: block + 1000 }
  3. OApp switches to custom LibraryC

    • Sets receive_library[oapp][eid] = LibraryC
    • Clears receive_library_timeout[oapp][eid] (no grace period allowed per lines 289-294)
  4. When validating in-flight messages using LibraryA:

    • is_default = false (OApp now uses custom library)
    • Checks receive_library_timeout[oapp][eid] → finds DEFAULT_TIMEOUT (empty)
    • Never checks default_receive_library_timeout[eid] where LibraryA’s grace period exists
    • LibraryA is incorrectly rejected despite having a valid protocol-level grace period

Scenario 2: Custom → Default Library Switch

  1. OApp uses custom LibraryA
  2. OApp switches to DEFAULT_LIB (cannot set grace period due to restriction at lines 289-294)

    • receive_library_timeout[oapp][eid] is cleared
  3. When validating in-flight messages using LibraryA:

    • is_default = true (OApp now uses default)
    • Checks default_receive_library_timeout[eid] (doesn’t contain LibraryA)
    • Never checks the now-cleared OApp-specific timeout
    • LibraryA is rejected with no grace period
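Both scenarios reduce to the same lookup flaw, sketched here in Python (storage modeled as dicts; names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Timeout:
    lib: str = ""
    expiry: int = 0

# Scenario 1 state: protocol set a grace period for the old default LibraryA,
# then the OApp switched to a custom library, clearing its own timeout slot.
default_timeout = {"eid1": Timeout(lib="LibraryA", expiry=2000)}
oapp_timeout: dict[str, Timeout] = {}

def is_valid_current(lib: str, is_default: bool, block: int) -> bool:
    # Current logic: only ONE storage location is consulted
    t = default_timeout.get("eid1", Timeout()) if is_default \
        else oapp_timeout.get("eid1", Timeout())
    return t.lib == lib and t.expiry > block

def is_valid_fixed(lib: str, block: int) -> bool:
    # Proposed logic: consult both locations
    for t in (oapp_timeout.get("eid1", Timeout()),
              default_timeout.get("eid1", Timeout())):
        if t.lib == lib and t.expiry > block:
            return True
    return False

# OApp now on a custom library, so is_default == False:
print(is_valid_current("LibraryA", is_default=False, block=1500))  # False -- message dropped
print(is_valid_fixed("LibraryA", block=1500))                      # True  -- grace period honored
```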

Impact

  • Message loss: In-flight messages are rejected even when grace periods were properly set
  • Fund loss: Failed message delivery can result in locked or lost funds in cross-chain transfers
  • Protocol violation: The documented grace period mechanism is completely bypassed
  • User disruption: OApps cannot safely switch between default and custom libraries without risking message failures

The restriction at lines 289-294 that prevents grace periods when DEFAULT_LIB is involved exacerbates this issue by making Scenario 2 always result in zero grace period.

Recommended Mitigation Steps

The is_valid_receive_library function should check both timeout storage locations to validate if a library has a grace period, regardless of the current library type:

fn is_valid_receive_library(
    self: @ComponentState<TContractState>,
    oapp: ContractAddress,
    src_eid: u32,
    lib: ContractAddress,
) -> bool {
    // First check if the lib is the currently configured receive library
    let GetLibraryResponse {
        lib: expected_lib, is_default,
    } = self.get_receive_library(oapp, src_eid);
    if lib == expected_lib {
        return true;
    }

    // Check OApp-specific timeout
    let oapp_timeout = self.receive_library_timeout.entry(oapp).entry(src_eid).read();
    if oapp_timeout.lib == lib && oapp_timeout.expiry > get_block_number() {
        return true;
    }

    // Check default library timeout
    let default_timeout = self.default_receive_library_timeout.entry(src_eid).read();
    if default_timeout.lib == lib && default_timeout.expiry > get_block_number() {
        return true;
    }

    false
}

Additionally, consider preserving grace periods when switching between default and custom libraries, or at minimum, clear the appropriate timeout storage in both locations to prevent orphaned grace periods.

[L-06] ULN302 verifiable() conflates distinct failure states with “Verified”, breaking off-chain relayer logic

The verifiable() function in ULN302 returns VerificationState::Verified when _endpoint_verifiable() returns false, which occurs in multiple distinct scenarios:

Lines 382-384:

// check endpoint verifiable
if !self._endpoint_verifiable(origin, receiver, payload_hash) {
    return VerificationState::Verified;
}

The _endpoint_verifiable() function returns false in the following cases (lines 764-786):

  1. Empty payload hash (payload_hash == EMPTY_PAYLOAD_HASH)

    • Invalid input
  2. Not committable (committable_with_receive_lib == false)

    • Which happens when:

      • Nonce ≤ lazy_inbound_nonce AND no payload hash exists (stale/old message)
      • Invalid receive library configuration
  3. Already verified (inbound_payload_hash == payload_hash) - Legitimately completed

All three scenarios return the same Verified state, making it impossible for off-chain systems to distinguish between:

  • ✅ “Successfully verified and committed”
  • ❌ “Invalid payload hash”
  • ❌ “Configuration error (invalid receive library)”
  • ⏭️ “Stale message (old nonce)”
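The conflation is easy to demonstrate with a Python sketch of the control flow (state names follow the Cairo enum; the boolean model of `_endpoint_verifiable()` is an assumption based on the cases listed above):

```python
def verifiable(payload_hash: int, committable: bool, already_verified: bool) -> str:
    EMPTY_PAYLOAD_HASH = 0
    # Models _endpoint_verifiable(): false on empty hash, non-committable,
    # or already-verified messages
    endpoint_verifiable = (
        payload_hash != EMPTY_PAYLOAD_HASH and committable and not already_verified
    )
    if not endpoint_verifiable:
        return "Verified"  # three distinct outcomes collapse to one state
    return "Verifiable"

# All of these report "Verified", though only the last one actually was:
print(verifiable(payload_hash=0, committable=True, already_verified=False))   # invalid input
print(verifiable(payload_hash=7, committable=False, already_verified=False))  # stale / bad config
print(verifiable(payload_hash=7, committable=True, already_verified=True))    # genuinely verified
```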

Impact

Off-chain relayers, DVNs, and monitoring systems rely on verifiable() to determine message state and decide whether to:

  • Submit verifications to DVNs
  • Retry failed messages
  • Display status in monitoring dashboards
  • Trigger alerts for stuck messages

The ambiguous Verified state causes:

  1. Operational inefficiency: Relayers cannot distinguish between “done” and “failed,” leading to unnecessary retry attempts on invalid messages
  2. Poor error handling: Systems cannot provide meaningful error messages or alerts when messages fail due to configuration issues
  3. Brittle integration logic: Off-chain systems must implement workarounds by querying multiple additional view functions (committable_with_receive_lib, inbound_payload_hash) to disambiguate state
  4. Silent failures: Configuration errors (invalid receive library) return Verified instead of a distinct error state, masking problems

This is particularly problematic because LayerZero is a cross-chain messaging protocol where reliable off-chain infrastructure is critical to protocol operation.

Recommended Mitigation Steps

Option 1: Add more granular states

Extend the VerificationState enum to distinguish between different non-verifiable conditions:

pub enum VerificationState {
    Verifying,
    Verifiable,
    Verified,           // Actually committed
    NotInitializable,
    InvalidPayload,     // Empty or invalid payload hash
    NotCommittable,     // Stale nonce or other committability issues
    InvalidConfig,      // Invalid receive library
}

Option 2: Return composite information

Change the function to return more detailed state information:

#[derive(Drop, Serde, PartialEq, Debug)]
pub struct VerificationStatus {
    pub state: VerificationState,
    pub reason: ByteArray,  // Human-readable reason for the state
}

This provides clear semantics for off-chain systems to make informed decisions about message handling.


Disclosures

C4 audits incentivize the discovery of exploits, vulnerabilities, and bugs in smart contracts. Security researchers are rewarded at an increasing rate for finding higher-risk issues. Audit submissions are judged by a knowledgeable security researcher and disclosed to sponsoring developers. C4 does not conduct formal verification regarding the provided code but instead provides final verification.

C4 does not provide any guarantee or warranty regarding the security of this project. All smart contract software should be used at the sole risk and responsibility of users.