Swafe

Findings & Analysis Report

2026-05-04

Table of contents

Overview

About C4

Code4rena (C4) is a competitive audit platform where security researchers, referred to as Wardens, review, audit, and analyze codebases for security vulnerabilities in exchange for bounties provided by sponsoring projects.

During the audit outlined in this document, C4 conducted an analysis of the Swafe smart contract system. The audit took place from November 18 to December 09, 2025.

Following the C4 audit, 3 wardens (montecristo, niffylord, DCENT09) reviewed the mitigations of 7 Medium, 14 Low, and 2 QA items; the mitigation review report is appended below the audit report.

Final report assembled by Code4rena.

Summary

The C4 analysis yielded an aggregated total of 7 unique vulnerabilities. Of these vulnerabilities, 7 received a risk rating in the category of MEDIUM severity.

Additionally, C4 analysis included 35 QA reports compiling issues with a risk rating of LOW severity or informational.

All of the issues presented here are linked back to their original finding, which may include relevant context from the judge and Swafe team.

Considering the number of issues identified, it is statistically likely that there are more complex bugs still present that could not be identified given the time-boxed nature of this engagement. It is recommended that a follow-up audit and development of a more complex stateful test suite be undertaken prior to continuing to deploy significant monetary capital to production.

Scope

The code under review can be found within the C4 Swafe repository. It is composed of a Rust library with a single Partisia contract and comprises 7,128 lines of Rust code.

The code in C4’s Swafe repository was pulled from:

Severity Criteria

C4 assesses the severity of disclosed vulnerabilities based on three primary risk categories: high, medium, and low/informational.

High-level considerations for vulnerabilities span the following key areas when conducting assessments:

  • Malicious Input Handling
  • Escalation of privileges
  • Arithmetic
  • Gas use

For more information regarding the severity criteria referenced throughout the submission review process, please refer to the documentation provided on the C4 website, specifically our section on Severity Categorization.

Medium Risk Findings (7)

[M-01] Guardian share replay overwrite causes persistent recovery DoS (missing session binding)

Submitted by czarcas7ic, also found by 0xsolisec, Ahmerdrarerh, ameng, Bala1796, bunnyhunter, codertjay, Dest1ny_rs, eloujoe, Evo, felconsec, Guilherme, HalfBloodPrince, happykilling, hecker_trieu_tien, hodlturk, IndominusFortune, JuggerNaut63, Khan2018, kmkm, MakeIChop, niffylord, nstatoshi, odeili, pashap9990, psyone, Rhaydden, richa, ScientificKatie420, swordfish, Vivekz, and Yifan

  • contracts/src/http/endpoints/reconstruction/upload_share.rs #L52-L67
  • lib/src/backup/v0.rs #L342-L354

This finding concerns the guardian share upload endpoint used during social recovery, implemented in contracts/src/http/endpoints/reconstruction/upload_share.rs.

The handler currently accepts any valid GuardianShare for a given (account_id, backup_id) and blindly overwrites any existing share for that guardian index:

pub fn handler(
    mut ctx: OffChainContext,
    state: ContractState,
    request: HttpRequestData,
    _params: Params,
) -> Result<HttpResponseData, ContractError> {
    let request: Request = deserialize_request_body(&request)?;
    // ...
    // The share id will be in the range [0, |shares|)
    let share_id = backup
        .verify(&request.share.0)
        .map_err(|_| ServerError::InvalidParameter("Invalid guardian share".to_string()))?;

    // Update the share mapping for this backup
    // usually, the share will not already exist in this map:
    // we allow overwriting in case of a buggy client library and to
    // simplify a client which fails during the upload process: it can simply retry all uploads.
    //
    // Potentially different multiple versions of the same share are all equivalent.
    // Hence no replay protection is required here.
    let storage_key = (account_id, backup_id);
    let mut shares = GuardianShareCollection::load(&mut ctx, storage_key).unwrap_or_default();
    shares.insert(share_id, request.share.0);  // overwrites existing share
    GuardianShareCollection::store(&mut ctx, storage_key, shares);
    // ... (response construction elided)
}

GitHub permalink: contracts/src/http/endpoints/reconstruction/upload_share.rs#L33-L74

The in-code comment that “potentially different multiple versions of the same share are all equivalent” is correct at the level of the underlying Shamir share, but it no longer holds for recovery sessions once shares are encrypted to per-session recovery_pke keys—ciphertexts from different sessions are not interchangeable for the current recovery.

Root cause

At a cryptographic level, a GuardianShare is:

  • A Shamir share for the backup, plus
  • A signature and ciphertext that bind the share to a specific recovery public key (recovery_pke):
// GuardianShare construction for recovery
impl DecryptedShareV0 {
    pub fn send_for_recovery<R: Rng + CryptoRng>(
        &self,
        rng: &mut R,
        owner: &AccountState,
    ) -> Result<GuardianShare, SwafeError> {
        let recovery_pke =
            match owner {
                AccountState::V0(state) => state.rec.pke.as_ref().ok_or_else(|| {
                    SwafeError::InvalidOperation("Recovery not started".to_string())
                })?,
            };
        let ct = recovery_pke.encrypt(rng, &self.share.share, &EmptyInfo);
        let sig = self.share.sk.sign(
            rng,
            &SignedEncryptedShare {
                ct: &ct,
                idx: self.idx,
            },
        );
        Ok(GuardianShare::V0(GuardianShareV0 {
            ct,
            idx: self.idx,
            sig,
        }))
    }
}

GitHub permalink: lib/src/backup/v0.rs#L154-L179 (send_for_recovery)

Recovery is session-bound:

  • Each recovery initiation generates a fresh asymmetric keypair used only for that recovery attempt:
impl AccountStateV0 {
    /// Initiate recovery using the RIK from offchain nodes
    pub fn initiate_recovery<R: Rng + CryptoRng>(
        &self,
        rng: &mut R,
        acc: AccountId,
        rik: &RecoveryInitiationKey,
    ) -> Result<(AccountUpdate, RecoverySecrets)> {
        // ... (decryption of the RIK-encapsulated `encap` is elided) ...

        // generate new keys for this recovery session
        let dkey = pke::DecryptionKey::gen(rng);

        // sign the recovery request with the signing key from RIK
        let sig = encap.key_sig.sign(
            rng,
            &RecoveryRequestMessage {
                account_id: acc,
                recovery_pke: dkey.encryption_key(),
            },
        );

        // ... (construction of the `update` AccountUpdate is elided) ...

        Ok((
            update,
            RecoverySecrets {
                acc,
                rec: self.rec.clone(),
                msk_ss_rik: *encap.msk_ss_rik.as_bytes(),
                dkey,
            },
        ))
    }
}

GitHub permalink: lib/src/account/v0.rs#L165-L226 (AccountStateV0::initiate_recovery)

  • The final reconstruction uses RecoverySecrets.dkey to decrypt the guardian shares:
impl RecoverySecrets {
    /// Complete recovery using guardian shares
    pub fn complete(&self, shares: &[GuardianShare]) -> Result<MasterSecretKey> {
        // recover the social secret share from the backup
        let msk_ss_social: MskSecretShareSocial = match &self.rec.social {
            BackupCiphertext::V0(v0) => {
                v0.recover(&self.dkey, &self.msk_ss_rik, &EmptyInfo, shares)?
            }
        };
        // ... (derivation of the MasterSecretKey is elided) ...
    }
}

GitHub permalink: lib/src/account/v0.rs#L140-L162 (RecoverySecrets::complete)

During this recover call, each GuardianShare is:

  • Verified and decrypted, and
  • Discarded if decryption or commitment verification fails:
impl BackupCiphertextV0 {
    pub fn recover<M: Tagged + DeserializeOwned, A: Tagged>(
        &self,
        dke: &pke::DecryptionKey,
        sym: &sym::Key,
        aad: &A,
        shares: &[GuardianShare],
    ) -> Result<M, SwafeError> {
        // Verify and decrypt each share
        // Ignore invalid and duplicate shares
        let shares: Vec<(u32, Share)> = shares
            .iter()
            .filter_map(|share| {
                let GuardianShare::V0(share_v0) = share;
                let id = self.verify(share_v0).ok()?;
                let share: Share = dke.decrypt(&share_v0.ct, aad).ok()?;
                if self.comms[id as usize].hash == hash(&ShareHash { share: &share }) {
                    Some((id, share))
                } else {
                    None
                }
            })
            .collect::<BTreeMap<u32, Share>>()
            .into_iter()
            .collect();

        // (`meta` holds the backup metadata; its definition is in the elided part of this function)
        if shares.len() < meta.threshold as usize {
            return Err(SwafeError::InsufficientShares);
        }
        // ... (reconstruction from the collected shares is elided) ...
    }
}

GitHub permalink: lib/src/backup/v0.rs#L289-L340 (BackupCiphertextV0::recover)

Crucially, the on-chain storage for guardian shares does not encode which recovery session they belong to. The key used in the contract is only (account_id, backup_id, share_id). Any previously valid GuardianShare for that backup and guardian index will pass backup.verify, even if it was encrypted under a different, older recovery_pke from an earlier recovery attempt.

The combination of:

  1. Session-bound decryption (RecoverySecrets.dkey, per recovery attempt), and
  2. Session-agnostic storage (shares.insert(share_id, share) keyed only by backup + index),

means that stale shares from old recovery sessions remain verifiable but undecryptable in new recovery sessions.
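The mismatch between session-bound decryption and session-agnostic storage can be modeled with a toy store. All types below (the numeric IDs, the `session` tag on the ciphertext) are illustrative stand-ins, not the contract's actual storage API:

```rust
use std::collections::BTreeMap;

// Toy model: shares are stored under a key that ignores the recovery session.
type ShareKey = (u64 /* account_id */, u64 /* backup_id */, u32 /* share_id */);

// A "ciphertext" tagged with the session key it was encrypted under.
#[derive(Debug, Clone, PartialEq)]
struct Ciphertext {
    session: u32,
    payload: Vec<u8>,
}

// Mirrors upload_share: any verifiable share blindly overwrites the slot.
fn upload(store: &mut BTreeMap<ShareKey, Ciphertext>, key: ShareKey, ct: Ciphertext) {
    store.insert(key, ct);
}

// Decryption only succeeds for ciphertexts from the active session,
// modeling that dkey₂ cannot decrypt ciphertexts under recovery_pke₁.
fn decrypt(ct: &Ciphertext, active_session: u32) -> Option<Vec<u8>> {
    (ct.session == active_session).then(|| ct.payload.clone())
}

fn main() {
    let mut store = BTreeMap::new();
    let key = (1, 1, 0);

    // Session 2: guardian uploads a fresh share; the active recovery can use it.
    upload(&mut store, key, Ciphertext { session: 2, payload: vec![42] });
    assert!(decrypt(&store[&key], 2).is_some());

    // Attacker replays the stale session-1 ciphertext; it overwrites the slot...
    upload(&mut store, key, Ciphertext { session: 1, payload: vec![42] });
    // ...and the session-2 recovery can no longer decrypt it: DoS.
    assert!(decrypt(&store[&key], 2).is_none());
}
```

The replayed ciphertext is still "valid" for the store (it passes verification against the backup), yet useless to the current session, which is exactly the gap described above.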

Attack scenario and impact (Medium – Denial of Service)

Assuming an attacker can obtain previously uploaded guardian shares (e.g. via the unauthenticated /reconstruction/get-shares HTTP endpoint in contracts/src/http/endpoints/reconstruction/get_shares.rs, or through recorded network traffic/logs), the attacker can mount the following DoS:

Note: Even if the get-shares endpoint is restricted, any party that previously obtained guardian ciphertexts (e.g., by recording uploads) can replay them to overwrite fresh shares and deny recovery. A malicious guardian can already cause DoS by withholding; the replay/overwrite issue is impactful because a non‑guardian can block recovery despite all guardians cooperating.

  1. Recovery attempt 1 (Session 1)

    • User initiates recovery; a fresh recovery_pke₁ is published on-chain via AccountStateV0::initiate_recovery.
    • Guardians generate GuardianShares using send_for_recovery, encrypting shares under recovery_pke₁, and upload them via /reconstruction/upload-share.
    • Shares are stored under key (account_id, backup_id, share_id).
  2. Attacker collects session-1 shares

    • Because guardian shares are directly retrievable by any caller via /reconstruction/get-shares, an attacker can download the encoded GuardianShares for (account_id, backup_id).
  3. Recovery attempt 2 (Session 2)

    • Session 1 fails or is abandoned. The user initiates another recovery attempt.
    • AccountStateV0::initiate_recovery generates a fresh dkey₂ and publishes recovery_pke₂.
    • Guardians correctly compute new GuardianShares encrypted under recovery_pke₂ and upload them via /reconstruction/upload-share.
  4. Replay overwrite by attacker

    • Attacker replays the old session-1 GuardianShares by calling /reconstruction/upload-share with those stale ciphertexts.
    • The contract validates them with backup.verify (they are still cryptographically valid for the backup and index) and overwrites the fresh session-2 shares:

    stored_share[share_id] ← old session-1 share ciphertext (encrypted under recovery_pke₁)

  5. Recovery fails (DoS)

    • When the user runs RecoverySecrets::complete(&shares) for session 2:
    • Decryption of stale shares under dkey₂ fails.
    • These shares are silently filtered out in BackupCiphertextV0::recover.
    • If enough shares have been overwritten, the number of successfully decrypted shares falls below the threshold, causing SwafeError::InsufficientShares.
    • The attacker can repeat this process indefinitely, repeatedly overwriting any new session-2 shares with old session-1 ciphertexts, effectively blocking the user from ever completing recovery for that backup, even though guardians cooperate.

This is a persistent Denial of Service against the recovery mechanism:

  • It does not enable key theft or account takeover (the attacker never learns the underlying Shamir shares or MSK).
  • It does allow an unauthenticated attacker (not just a guardian) to grief the user and permanently prevent successful recovery, as long as the attacker can continuously replay old shares.
  • Impact: blocks a critical user function (recovery of the master secret key for the account). For a user who has lost their keys, the inability to recover is effectively a loss of funds, but this is treated as Medium rather than High because it requires a prior failed recovery and active griefing.
  • Likelihood: realistic given public readability of shares and a simple replay pattern.
  • Not High, because there is no direct loss of funds or unauthorized control transfer, but it does materially affect user safety and recoverability.

There are several complementary mitigation strategies; the most robust involves explicitly binding shares to recovery sessions.

  • Bind guardian shares to a specific recovery session (in the swafe_lib crate)

    In the swafe_lib crate, include a session identifier or epoch in the signed payload for guardian shares (e.g., augment the signed message such as SignedEncryptedShare).

    • Update the signing/types in swafe_lib (e.g., in lib/src/backup/v0.rs) to include the session_id (or a hash of the current recovery_pke) in the signed data.
    • Modify swafe_lib::backup::v0::BackupCiphertextV0::verify (or add a verify_for_session(...)) to enforce that a share’s session binding matches the active recovery session; old-session shares must not verify for a new session.
    • In upload_share, verify that the share’s session_id matches the current recovery session for that account.
    • If the session ID does not match, reject the upload.
    • This allows keeping the storage key as (account_id, backup_id, share_id) while ensuring that only shares for the active session are accepted. Since any valid share for the current session is functionally equivalent, overwrites within the same session are harmless (idempotent), but overwrites from old sessions are blocked.
  • Prevent or strictly control overwrites

    At minimum, make guardian shares append-only per session:

    • Reject an upload_share request if a share for (account_id, backup_id, session_id, share_id) already exists (where session_id is taken from the signed share metadata), unless the new payload is bitwise identical to the existing one.
    • Alternatively, allow overwrites only when accompanied by a nonce/timestamp and a guardian signature over (account_id, backup_id, session_id, share, timestamp), and store the latest timestamp; this makes replay of stale ciphertexts detectable and ignorable.
  • Optionally: restrict public readability or scope of get_shares

    The DoS described here heavily relies on an attacker being able to obtain old guardian shares:

    • Restrict /reconstruction/get-shares to authorized callers (e.g., require proof of control over the current recovery_pke or an authenticated session).
    • Or, at least, ensure it only returns shares from the latest session id and ignores/stops serving historical shares.

Implementing session-bound share storage and validation is the key mitigation: it ensures that old, but still cryptographically valid, guardian shares from previous recovery attempts cannot be replayed to corrupt the state of a new recovery session, thereby eliminating the observed DoS vector.
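The append-only variant of the mitigation can be sketched as a guard in the upload path. The session-keyed tuple and error name here are illustrative, not the contract's actual types:

```rust
use std::collections::BTreeMap;

// Illustrative storage key including the session: (account_id, backup_id, session_id, share_id)
type ShareKey = (u64, u64, u32, u32);

#[derive(Debug, PartialEq)]
enum UploadError {
    ConflictingShare,
}

/// Accept an upload only if the slot is empty or the payload is bitwise
/// identical to what is already stored (idempotent client retry).
/// Any attempt to overwrite with a *different* payload is rejected.
fn upload_append_only(
    store: &mut BTreeMap<ShareKey, Vec<u8>>,
    key: ShareKey,
    share: Vec<u8>,
) -> Result<(), UploadError> {
    if let Some(existing) = store.get(&key) {
        if *existing != share {
            return Err(UploadError::ConflictingShare);
        }
        return Ok(()); // bitwise identical: retry is a no-op
    }
    store.insert(key, share);
    Ok(())
}

fn main() {
    let mut store = BTreeMap::new();
    let key = (1, 1, 2, 0);
    assert!(upload_append_only(&mut store, key, vec![1, 2, 3]).is_ok()); // first upload
    assert!(upload_append_only(&mut store, key, vec![1, 2, 3]).is_ok()); // idempotent retry
    assert!(upload_append_only(&mut store, key, vec![9, 9, 9]).is_err()); // overwrite rejected
}
```

Because any valid share for the current session is functionally equivalent, rejecting only non-identical payloads preserves the "client can retry all uploads" property while blocking replay overwrites.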

View detailed Proof of Concept

Swafe mitigated:

Bind guardian shares to recovery sessions via SessionId, preventing replay of shares across sessions

Status: Mitigation confirmed. Full details in reports from montecristo, niffylord, and DCENT09.


[M-02] Marking a backup makes recovery impossible (recover list never queried)

Submitted by gwumex, also found by 0xchamel, 0xnija, 0xpetern, 0xvd, Agontuk, arunabha003, boodieboodieboo, codertjay, CoMMaNDO, Cryptor, ETHworker, Evo, felconsec, fromeo_016, Garen, happykilling, hellnia, honey-k12, ht111111, kind0dev, oxwhite, pashap9990, Psycharis, shieldrey, slvDev, The_Amazing_One, touristS, and zubyoz

AccountSecrets::mark_recovery moves the chosen backup from the backups vector into the separate recover queue (lib/src/account/v0.rs#L516-L523). The Partisia endpoint /reconstruction/upload-share looks up a backup by calling account.recover_id, which, in turn, iterates AccountState::recover_backups. Because recover_backups mistakenly returns self.backups instead of the recover list (lib/src/account/v0.rs#L236-L247), any backup that has been marked for recovery disappears from what the contract can see. As a result, every guardian share upload for that backup fails with “Backup not found” (contracts/src/http/endpoints/reconstruction/upload_share.rs#L48-L50), and the user can never collect enough shares to reconstruct the secret, violating the recovery/liveness invariant.

Change AccountStateV0::recover_backups to iterate the recover queue, or search both backups and recover depending on the intended semantics. Add regression tests that (1) create a backup, (2) call mark_recovery, and (3) verify that recover_id still returns the ciphertext so /reconstruction/upload-share accepts guardian shares. Optionally enforce that only backups present in recover are accepted to ensure the owner explicitly enabled recovery.
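The bug and the suggested regression test can be modeled with toy types (the real AccountStateV0 fields and backup types differ; backup IDs are plain integers here):

```rust
// Toy model of the M-02 bug: `recover_backups` must iterate the `recover`
// queue, not `backups`, or any backup marked for recovery becomes invisible.
struct Account {
    backups: Vec<u32>, // backups not yet marked for recovery
    recover: Vec<u32>, // backups moved here by mark_recovery
}

impl Account {
    // Mirrors AccountSecrets::mark_recovery: moves the backup between lists.
    fn mark_recovery(&mut self, id: u32) {
        self.backups.retain(|b| *b != id);
        self.recover.push(id);
    }

    // Buggy lookup: searches `backups`, as the current code effectively does.
    fn recover_id_buggy(&self, id: u32) -> Option<u32> {
        self.backups.iter().copied().find(|b| *b == id)
    }

    // Fixed lookup: searches the `recover` queue.
    fn recover_id_fixed(&self, id: u32) -> Option<u32> {
        self.recover.iter().copied().find(|b| *b == id)
    }
}

fn main() {
    // Regression-test shape from the recommendation:
    // (1) create a backup, (2) mark it for recovery, (3) check recover_id.
    let mut acc = Account { backups: vec![7], recover: vec![] };
    acc.mark_recovery(7);
    assert!(acc.recover_id_buggy(7).is_none()); // "Backup not found": recovery DoS
    assert!(acc.recover_id_fixed(7).is_some()); // fix: the marked backup is visible
}
```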

View detailed Proof of Concept

Swafe mitigated:

Changed recover_backups from self.backups.iter().collect() to self.recover.iter().chain(once(&self.rec.social)).collect()

Status: Mitigation confirmed. Full details in reports from montecristo, niffylord, and DCENT09.


[M-03] Majority consensus threshold not enforced for even number of nodes

Submitted by Tigerfrake, also found by 0xozovehe, AlexNik777, arunabha003, blockace, bunnyhunter, cpsec, Evo, jerry0422, johnyfwesh, Khan2018, legendweb3, Meks079, odeili, pashap9990, SanketKogekar, securehash1, ShadowBytes, Silvermist, tradingview, XOMA, ZanyBonzy, zubyoz, and zzebra83

  • lib/src/association/v0.rs #L473
  • lib/src/association/v0.rs #L556

The AssociationV0::reconstruct_rik_data() and AssociationV0::reconstruct_recovery_key() functions use div_ceil(2) for the majority threshold calculation, which creates a vulnerability when the number of nodes is even.

    // Do a threshold vote on the fixed fields (same logic as reconstruct_msk)
    let mut votes = HashMap::new();
    for (_, record) in &v0_records {
        *votes.entry(record.fixed.clone()).or_insert(0) += 1;
    }

    //@audit-issue Less for even number of records
>>  let majority_threshold = v0_records.len().div_ceil(2);
    let majority_fixed = votes
        .into_iter()
        .find(|(_, count)| *count >= majority_threshold)
        .map(|(fixed, _)| fixed)
        .ok_or_else(|| {
            SwafeError::InvalidInput(
                "No majority consensus on fixed fields among MSK records".to_string(),
            )
        })?;

This allows a minority of nodes (exactly 50%) to control the reconstruction process instead of requiring a true majority (50% + 1), undermining the core security guarantees of the threshold cryptography system. For example:

// Flawed threshold calculation:
let majority_threshold = v0_records.len().div_ceil(2);

// For odd numbers:
// - 3 nodes: threshold = 2 (correct)

// For even numbers:
// - 4 nodes: threshold = 2 (50%, not a true majority)

// FIX: Use a proper majority threshold (more than 50%)
-   let majority_threshold = v0_records.len().div_ceil(2);
+   let majority_threshold = (v0_records.len() / 2) + 1;
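The even/odd behavior is easy to tabulate with a standalone sketch (ceil_half is written as (n + 1) / 2, which equals div_ceil(2) for positive n):

```rust
// Ceiling-division threshold, as in the flawed code: equivalent to n.div_ceil(2).
fn ceil_half(n: usize) -> usize {
    (n + 1) / 2
}

// Strict majority: strictly more than half the nodes.
fn strict_majority(n: usize) -> usize {
    n / 2 + 1
}

fn main() {
    for n in 1..=6 {
        println!(
            "n = {n}: div_ceil threshold = {}, strict majority = {}",
            ceil_half(n),
            strict_majority(n)
        );
    }
    // Odd n: the two agree, e.g. 3 nodes -> threshold 2.
    assert_eq!(ceil_half(3), strict_majority(3));
    // Even n: div_ceil lets exactly half win, e.g. 4 nodes -> 2 instead of 3.
    assert_eq!(ceil_half(4), 2);
    assert_eq!(strict_majority(4), 3);
}
```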

View detailed Proof of Concept

Swafe mitigated:

  • Add strict_majority(n) helper returning (n / 2) + 1
  • Replace div_ceil(2) in reconstruct_rik_data and reconstruct_recovery_key

Status: Mitigation confirmed. Full details in reports from montecristo, niffylord, and DCENT09.


[M-04] Replayable recovery requests allow attacker to permanently block account recovery

Submitted by Rhaydden, also found by 0xanony, 0xAsen, 0xnija, 0xpetern, Agontuk, akupemulaygbaik, Alradyin, aman234, arkheionx, boodieboodieboo, bunnyhunter, clems4ever, cosin3, count-sum, czarcas7ic, eloujoe, Guilherme, holtzzx, kind0dev, Legend, maxim371, montecristo, niffylord, oxwhite, REHEroadchick, RotiTelur, SanketKogekar, th3_hybrid, tradingview, vangrim, Yu4n, Ziusz, zubyoz, and zzebra83

lib/src/account/v0.rs #L118-L127

The account recovery protocol lets a user who holds a RecoveryInitiationKey initiate recovery by posting a Recovery-type AccountUpdate to the contract. The core issue is that these recovery requests are authenticated but not bound to any notion of freshness or account version. As a result, any previously valid recovery request can be replayed at any time in the future, even after a newer recovery request has been made, and the contract will accept it and overwrite the newer request.

The recovery request that is signed by the RIK-derived key is RecoveryRequestMessage:

#[derive(Serialize)]
#[cfg_attr(test, derive(Clone))]
pub(crate) struct RecoveryRequestMessage {
    pub(crate) account_id: AccountId,
    pub(crate) recovery_pke: pke::EncryptionKey,
}

impl Tagged for RecoveryRequestMessage {
    const SEPARATOR: &'static str = "v0:recovery-request";
}

This message only includes the account_id and the new recovery_pke. It doesn't include the current account version (cnt), a nonce, or a timestamp.

On the client side, AccountStateV0::initiate_recovery signs this message using the RIK based signing key and constructs an AccountUpdate of type Recovery:

pub fn initiate_recovery<R: Rng + CryptoRng>(
    &self,
    rng: &mut R,
    acc: AccountId,
    rik: &RecoveryInitiationKey,
) -> Result<(AccountUpdate, RecoverySecrets)> {
    // decrypt AssociationsV0 using RIK, then:
    let dkey = pke::DecryptionKey::gen(rng);

    let sig = encap.key_sig.sign(
        rng,
        &RecoveryRequestMessage {
            account_id: acc,
            recovery_pke: dkey.encryption_key(),
        },
    );

    let update = AccountUpdate::V0(AccountUpdateV0 {
        acc,
        msg: AccountMessageV0::Recovery(AccountUpdateRecoveryV0 {
            pke: dkey.encryption_key(),
            sig,
        }),
    });

    // ...
    Ok((update, RecoverySecrets { /* ... */ }))
}

On the verification side, AccountUpdateV0::verify_update processes Recovery messages as follows:

pub(super) fn verify_update(self, old: &AccountStateV0) -> Result<AccountStateV0> {
    match self.msg {
        AccountMessageV0::Update(auth) => {
            let st = auth.state;
            // version must increase by exactly one
            if Some(st.cnt) != old.cnt.checked_add(1) {
                return Err(SwafeError::InvalidAccountStateVersion);
            }
            old.sig.verify(&auth.sig, &st)?;
            Ok(st)
        }
        AccountMessageV0::Recovery(recovery) => {
            let mut new_state = old.clone();

            {
                let rec = &mut new_state.rec;
                let recovery_msg = RecoveryRequestMessage {
                    account_id: self.acc,
                    recovery_pke: recovery.pke.clone(),
                };

                let mut verified = false;
                for assoc in &rec.assoc {
                    if assoc.sig.verify(&recovery.sig, &recovery_msg).is_ok() {
                        verified = true;
                        break;
                    }
                }

                if !verified {
                    return Err(SwafeError::InvalidSignature);
                }

                // Set the recovery PKE to indicate recovery has been initiated
                rec.pke = Some(recovery.pke);
            }
            Ok(new_state)
        }
    }
}

For normal Update messages, the code strictly enforces st.cnt == old.cnt + 1. For Recovery messages, there’s no version check at all. The function simply clones the old state, validates the signature against the stored recovery associations, and overwrites rec.pke with the pke from the message. The account version cnt remains unchanged.

The contract action in contracts/src/lib.rs then blindly trusts the result of update.verify:

#[action]
fn update_account(
    _ctx: ContractContext,
    mut state: ContractState,
    update_str: String,
) -> ContractState {
    // deserialize the account update from a string,
    let update: AccountUpdate =
        encode::deserialize_str(update_str.as_str()).expect("Failed to decode account update");

    // retrieve the *claimed* account ID
    let account_id = update.unsafe_account_id();

    // retrieve the old account state
    let st_old: Option<AccountState> = state
        .accounts
        .get(account_id.as_ref())
        .map(|bytes| encode::deserialize(&bytes).expect("failed to deserialize account state"));

    // verify the update using the lib
    let st_new = update
        .verify(st_old.as_ref())
        .expect("Failed to verify account update");

    // store the updated account state
    state.set_account(account_id, st_new);
    state
}

There is no additional replay protection. Any Recovery update that verifies under one of the existing associations is accepted, regardless of how old it is.

Finally, guardians drive the actual recovery off the on-chain rec.pke value:

pub fn check_for_recovery<R: Rng + CryptoRng>(
    &self,
    rng: &mut R,
    acc: AccountId,
    state: &AccountState,
) -> Result<Option<GuardianShare>> {
    let AccountState::V0(requester_state_v0) = state;

    // check if recovery has been initiated
    let rec_st = &requester_state_v0.rec;
    if rec_st.pke.is_none() {
        return Ok(None); // Recovery not initiated yet
    }

    // decrypt our share
    let guardian_secrets = self.clone();
    let secret_share = guardian_secrets
        .decrypt_share_recovery(acc, &rec_st.social)
        .ok_or_else(|| {
            SwafeError::InvalidOperation(
                "Guardian not authorized for this recovery or failed to decrypt share".to_string(),
            )
        })?;

    // reencrypt the share for the requester's recovery PKE key
    Ok(Some(secret_share.send_for_recovery(rng, state)?))
}

Guardians always use the current rec.pke to encrypt shares. If an attacker can cause the on-chain rec.pke to revert to an older key by replaying an old Recovery update, guardians will encrypt to that old key instead of the latest one the user intended.

This gives an attacker a reliable way to deny liveness of the recovery process. A realistic scenario looks like this:

  • User loses their primary device and relies on the email + guardians-based recovery path.
  • User initiates Recovery session A with some ephemeral public key PKE_A. The transaction either fails, is abandoned, or just gets observed by an adversary.
  • Later, user initiates Recovery session B with a fresh key PKE_B.
  • Every time B is attempted, the adversary replays the old A update. Since there is no version or nonce binding, the contract happily accepts the old message and sets rec.pke back to PKE_A.
  • Guardians observing the chain generate shares for whatever rec.pke is at that moment. If the attacker’s replay wins the race, that will be PKE_A, not PKE_B.
  • The user only holds the private key for PKE_B. They cannot decrypt shares encrypted to PKE_A, so recovery repeatedly fails.

This can permanently prevent a user who has actually lost their main device from ever recovering their account, as long as an attacker can keep replaying the old Recovery update (e.g., from a public mempool or any log of prior transactions).

The core problem is that recovery requests are authenticated but not tied to a specific state or time. It is recommended to add a freshness component (e.g., the current account version) to the signed RecoveryRequestMessage and to enforce it in verify_update.
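A minimal sketch of such a binding (field and error names are hypothetical; signature verification over all fields, including the version, is elided) ties each request to the exact account state it targets:

```rust
// Toy model: a recovery request bound to the account version it was minted for.
struct RecoveryRequest {
    account_id: u64,
    recovery_pke: [u8; 32],
    cnt: u64, // account version this request is valid for
}

#[derive(Debug, PartialEq)]
enum VerifyError {
    StaleRequest,
}

/// Accept the request only if it targets the current account version.
/// (The signature must also cover `cnt`, or an attacker could substitute it.)
fn verify_recovery(req: &RecoveryRequest, current_cnt: u64) -> Result<(), VerifyError> {
    if req.cnt != current_cnt {
        return Err(VerifyError::StaleRequest);
    }
    Ok(())
}

fn main() {
    // A session-A request replayed after the state has moved on is rejected...
    let old = RecoveryRequest { account_id: 1, recovery_pke: [0; 32], cnt: 4 };
    assert_eq!(verify_recovery(&old, 5), Err(VerifyError::StaleRequest));

    // ...while a request minted for the current version passes.
    let fresh = RecoveryRequest { account_id: 1, recovery_pke: [1; 32], cnt: 5 };
    assert!(verify_recovery(&fresh, 5).is_ok());
}
```

This is the shape of the mitigation the sponsor ultimately adopted: each recovery request is valid only for the exact state version it targets.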

View detailed Proof of Concept

Swafe mitigated:

  • Add cnt_acc (mirrors account version) and cnt_rec in RecoveryStateV0.
  • verify_update checks recovery.cnt == old.cnt and increments cnt_rec on acceptance
  • Each recovery request is valid only for the exact state version it targets

Status: Mitigation confirmed. Full details in reports from montecristo, niffylord, and DCENT09.


[M-05] Unbounded associations per account make recovery initiation linear-time

Submitted by johnyfwesh, also found by 0xcb90f054, DCENT09, legendweb3, and wuji

lib/src/account/v0.rs #L100-L106

Recovery associations are stored in an unbounded Vec and every recovery initiation scans that entire list. RecoveryStateV0 carries all associations in assoc: Vec<AssociationsV0> with no cap or pagination (lib/src/account/v0.rs:100-106). AccountSecrets::add_association appends a new entry on each call without limits or rate controls (lib/src/account/v0.rs:605-619).

When recovery is initiated, AccountStateV0::initiate_recovery iterates over self.rec.assoc and attempts a symmetric decrypt against every stored association until one matches (lib/src/account/v0.rs:171-226). Recovery updates are also verified by linearly scanning rec.assoc and checking signatures (lib/src/account/v0.rs:786-833).

Because there is no bound or indexing structure, recovery initiation and verification remain linear in the number of associations, and n is fully controllable by the account owner (or any party with the signing key) via repeated add_association calls.

Affected Code

// lib/src/account/v0.rs:100-106
#[derive(Serialize, Deserialize, Clone)]
pub(crate) struct RecoveryStateV0 {
    pub pke: Option<pke::EncryptionKey>, // this is set iff. recovery has been started
    pub(crate) assoc: Vec<AssociationsV0>, // encryption of the recovery authorization key
    pub(crate) social: BackupCiphertext, // social backup ciphertext
    pub(crate) enc_msk: sym::AEADCiphertext, // encrypted MSK (encrypted with key derived from RIK and social shares)
}
  • All associations are kept in an unbounded Vec, so subsequent consumers must linearly scan the list.
// lib/src/account/v0.rs:605-619
pub fn add_association<R: Rng + CryptoRng>(
    &mut self,
    rng: &mut R,
) -> Result<RecoveryInitiationKey> {
    self.dirty = true;

    // generate fresh RIK for this association
    let rik = RecoveryInitiationKey::gen(rng);

    // Add to existing associations
    self.recovery
        .assoc
        .push(AssociationSecretV0 { rik: rik.clone() });
    Ok(rik)
}
  • Association addition simply pushes into the vector with no limit, deduplication, or guardrails.
// lib/src/account/v0.rs:171-226
pub fn initiate_recovery<R: Rng + CryptoRng>(
    &self,
    rng: &mut R,
    acc: AccountId,
    rik: &RecoveryInitiationKey,
) -> Result<(AccountUpdate, RecoverySecrets)> {
    // decrypt AssociationsV0 using RIK
    let encap = self
        .rec
        .assoc
        .iter()
        .find_map(|assoc| {
            // attempt to decrypt the encapsulated key using RIK
            let encap = sym::open::<EncapV0, _>(rik.as_bytes(), &assoc.encap, &acc).ok()?;

            // check if the verification key matches the expected one
            if encap.key_sig.verification_key() != assoc.sig {
                None
            } else {
                Some(encap)
            }
        })
        .ok_or(SwafeError::InvalidRecoveryKey)?;
    // ...
}
  • Each initiation attempt decrypts every stored association until it finds a match, making the runtime linear in the number of associations.
// lib/src/account/v0.rs:786-833
pub(super) fn verify_update(self, old: &AccountStateV0) -> Result<AccountStateV0> {
    match self.msg {
        AccountMessageV0::Recovery(recovery) => {
            let mut new_state = old.clone();
            {
                let rec = &mut new_state.rec;
                // ...
                for assoc in &rec.assoc {
                    // Verify signature using the recovery signing key from associations
                    if assoc.sig.verify(&recovery.sig, &recovery_msg).is_ok() {
                        verified = true;
                        break;
                    }
                }
                if !verified {
                    return Err(SwafeError::InvalidSignature);
                }
                // Set the recovery PKE to indicate recovery has been initiated
                rec.pke = Some(recovery.pke);
            }
            Ok(new_state)
        }
        // ...
    }
}
  • Recovery updates are also verified by iterating over the entire association list, compounding the per-operation cost as the vector grows.

Impact

  • Recovery initiation and verification perform decryptions/signature checks per attempt, so accounts with many associations can trigger high latency, timeouts, or resource exhaustion in constrained or multi-tenant environments.
  • Large association vectors also inflate persisted account state, increasing storage and serialization/deserialization overhead for all operations involving that account.

View detailed Proof of Concept

Recommended mitigation steps

  • Enforce an upper bound or quota on associations per account (reject additions beyond the cap, or require revocation/rotation).
  • Replace the linear scan with a keyed lookup (e.g., index by verification key or association identifier), or require callers to specify which association they intend to use, validated against stored metadata.
  • Add pagination or rate limiting around recovery initiation to prevent excessive per-request work, and prune or deduplicate stale associations during updates.
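
The keyed-lookup recommendation can be sketched as below. This is a hypothetical illustration, not the Swafe code: `Associations`, `by_id`, and the `u64` identifier type are simplified stand-ins; the point is that recovery initiation becomes a single map lookup instead of a trial decryption over every stored entry.

```rust
use std::collections::HashMap;

// Hypothetical sketch of the keyed-lookup recommendation: index
// associations by an identifier so recovery initiation does one O(1)
// lookup instead of attempting to decrypt every stored association.
struct Associations {
    by_id: HashMap<u64, Vec<u8>>, // association id -> encapsulated key bytes
}

impl Associations {
    fn find(&self, id: u64) -> Option<&[u8]> {
        self.by_id.get(&id).map(|v| v.as_slice())
    }
}

fn main() {
    let mut by_id = HashMap::new();
    by_id.insert(42u64, vec![1, 2, 3]);
    let assoc = Associations { by_id };
    assert!(assoc.find(42).is_some()); // direct hit, no linear scan
    assert!(assoc.find(7).is_none()); // unknown id rejected immediately
    println!("keyed lookup ok");
}
```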

Swafe mitigated:

  • Add MAX_ASSOCIATIONS = 16 constant
  • add_association rejects at cap with TooManyAssociations
  • verify_allocation and verify_update reject submitted states exceeding cap
  • Contract error mapping for new variant
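
The cap check described in the mitigation can be sketched as below. The `MAX_ASSOCIATIONS` constant name and `TooManyAssociations` variant come from the mitigation notes; the surrounding types are simplified stand-ins for the real account state, not the actual Swafe implementation.

```rust
// Hypothetical sketch of the described cap; types simplified.
const MAX_ASSOCIATIONS: usize = 16;

#[derive(Debug, PartialEq)]
enum SwafeError {
    TooManyAssociations,
}

struct Recovery {
    assoc: Vec<u8>, // placeholder element type for AssociationSecretV0
}

impl Recovery {
    fn add_association(&mut self) -> Result<(), SwafeError> {
        // reject additions once the cap is reached
        if self.assoc.len() >= MAX_ASSOCIATIONS {
            return Err(SwafeError::TooManyAssociations);
        }
        self.assoc.push(0); // a real implementation pushes a fresh RIK
        Ok(())
    }
}

fn main() {
    let mut full = Recovery { assoc: vec![0; MAX_ASSOCIATIONS] };
    assert_eq!(full.add_association(), Err(SwafeError::TooManyAssociations));
    let mut empty = Recovery { assoc: vec![] };
    assert!(empty.add_association().is_ok());
    println!("cap enforced at {}", MAX_ASSOCIATIONS);
}
```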

Status: Mitigation confirmed. Full details in reports from montecristo, niffylord, and DCENT09.


[M-06] Recovery can be done without Guardians’ approvals by looking at initial account update tx data

Submitted by montecristo

lib/src/account/v0.rs #L436

In general, recovery can only be done with Guardians’ approvals: the secret must be recovered from the guardians’ shares and combined with msk_ss_rik to calculate key_data, which is then used to decrypt the original data (in social recovery, the data is msk_ss_social).

However, when an account is initially registered, the AccountUpdate struct’s social recovery information is generated with unsafe, trivial parameters (threshold = 0). This allows derivation of key_data and key_meta without relying on guardian shares.

The AccountUpdate from the initial account registration is recorded on chain and publicly available, so anyone holding msk_ss_rik can look up the transaction history and decrypt the original data (msk_ss_social) without possessing any guardian shares, because key_data and key_meta of the initial social recovery can be calculated deterministically.

With msk_ss_rik and msk_ss_social, one can derive msk_decryption_key and recover the account’s msk from AccountState.rec.enc_msk.

This is a violation of one of the main invariants:

Recovery of a backup only occurs when more than the specified threshold of Guardians has approved the request.

Finding description and impact

The root cause stems from the following facts:

  1. When the account secret is generated, recovery.social is created with default parameters (guardians = [], threshold = 0) trace1, trace2

    File: lib/src/account/v0.rs

    409:         let social = create_recovery(
    410:             rng, //
    411:             acc,
    412:             &msk_ss_rik,
    413:             &msk_ss_social,
    414:@>           &[],
    415:@>           0,
    416:         )?;
  2. This allows deterministic derivation of key_meta and key_data without relying on guardians’ shares

    1. sss::share will return (Fr::ZERO, []) for threshold 0 trace1, trace2

      File: lib/src/backup/v0.rs

      // @audit threshold = 0, guardians = []
      380: let (secret, shares) = sss::share(rng, threshold, guardians.len());

      File: lib/src/crypto/sss.rs

      32: pub(crate) fn share<R: RngCore + CryptoRng>(
      33:     rng: &mut R,
      34:     t: usize,
      35:     n: usize,
      36: ) -> (Secret, Vec<Share>) {
      37:     // a threshold 0 sharing is just a constant
      38:     if t == 0 {
      39:@>       return (Secret(pp::Fr::ZERO), vec![]);
      40:     }
    2. As a result, key_meta and key_data can be directly calculated from msk_ss_rik

      1. key_meta = kdfn(msk_ss_rik, "KDFMetakey" || [])

        File: lib/src/backup/v0.rs

        // @audit comms = []
        404: let key_meta: [u8; sym::SIZE_KEY] = kdfn(sym_key, &KDFMetakey { comms: &comms });
      2. key_data = kdfn("BackupKDFInput" || msk_ss_rik || Fr::ZERO, "EmptyInfo")

        File: lib/src/backup/v0.rs

        409: let key_data: [u8; sym::SIZE_KEY] = kdfn(
        410:             &BackupKDFInput {
        411:                 key: sym_key,
        412:                 secret, // @audit secret = 0
        413:             },
        414:             &EmptyInfo,
        415:         );
  3. This recovery.social information is available on chain because it’s included in initial account update transaction trace1, trace2

    File: lib/src/account/v0.rs

    635:     pub fn update<R: Rng + CryptoRng>(&self, rng: &mut R) -> Result<AccountUpdate> {
    ...
    703:         let st = AccountStateV0 {
    ...
    710:             rec: RecoveryStateV0 {
    711:                 pke: None,
    712:                 assoc,
    713:                 // TODO: unfortunately we cannot generate this anew every time
    714:@>               social: self.recovery.social.clone(),
    715:                 enc_msk,
    716:             },
    717:         };
    718: 
    719:         let sig = self.old_sig.sign(rng, &st);
    720:         Ok(AccountUpdate::V0(AccountUpdateV0 {
    721:             acc: self.acc,
    722:@>           msg: AccountMessageV0::Update(AccountUpdateFullV0 { sig, state: st }),
    723:         }))

    This means anyone with the knowledge of msk_ss_rik can recover msk_ss_social because:

  4. Since initial account_update.rec.social is visible on chain, social recovery BackupCipherText is known
  5. This initial recovery is generated with known parameters i.e. threshold = 0, guardians = []
  6. BackupCipherText consists of the following values ref:

    1. data = Sym.Seal(key_meta, BackupMetadata(Sym.Seal(key_data, msk_ss_social))) ref
    2. encap = [] because threshold = 0
    3. comms = [] due to the same reason
  7. Since key_data and key_meta can be calculated from msk_ss_rik, msk_ss_social can be decrypted from BackupCipherText

With msk_ss_rik and msk_ss_social, one can derive msk_decryption_key and recover account’s msk.
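
The threshold-0 degeneration at the heart of this finding can be illustrated as below. This is a stand-alone sketch over plain integers, not the Swafe field arithmetic: with t = 0, a Shamir "sharing" collapses to the public constant zero with no shares, so nothing is actually guarded by guardians and the derived key_data depends only on msk_ss_rik.

```rust
// Illustrative sketch (not the Swafe code): threshold 0 means the
// "secret" is a known constant and no guardian shares exist.
fn share(t: usize, _n: usize) -> (u64, Vec<u64>) {
    if t == 0 {
        // mirrors sss::share returning (Fr::ZERO, vec![])
        return (0, vec![]);
    }
    // a real implementation samples a random degree-(t-1) polynomial
    unimplemented!("not needed for this illustration")
}

fn main() {
    let (secret, shares) = share(0, 5);
    assert_eq!(secret, 0); // the "secret" is a public constant
    assert!(shares.is_empty()); // no guardian shares exist at all
    println!("threshold-0 sharing is public");
}
```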

Impact

This is a violation of one of the main invariants in the README:

  • Recovery of a backup only occurs when more than the specified threshold of Guardians has approved the request.

FAQ

Q: This attack requires the knowledge of msk_ss_rik, which is a significant restriction.

A: Correct; however, I believe this report is worth Medium severity because it violates the main invariant. Without the invariant, I wouldn’t have submitted this issue. Moreover, key_data and key_meta should only be retrievable via guardians’ approval, even for a party that knows msk_ss_rik, because guardians “guard” the secret from which key_data is derived. However, the protocol publicly shares an unsafe encryption of msk_ss_social by mistake.

Q: How do you know initial account update will always contain unsafe social recovery?

A: Because the AccountUpdate.msg.state.rec.social property is mandatory and will very likely be filled in by the library logic explained above:

AccountUpdate -> AccountUpdateV0 -> AccountMessageV0 -> AccountUpdateFullV0 -> AccountStateV0 -> RecoveryStateV0 -> social: BackupCipherText

File: lib/src/account/v0.rs

101: pub(crate) struct RecoveryStateV0 {
102:     pub pke: Option<pke::EncryptionKey>, // this is set iff. recovery has been started
103:     pub(crate) assoc: Vec<AssociationsV0>, // encryption of the recovery authorization key
104:@>   pub(crate) social: BackupCiphertext, // social backup ciphertext
105:     pub(crate) enc_msk: sym::AEADCiphertext, // encrypted MSK (encrypted with key derived from RIK and social shares)
106: }

Q: This only happens for account that did not initiate social recovery. If the account initiates a social recovery, rec.social will be updated with BackupCipherText generated with non-zero threshold and guardians.

A: Wrong. Accounts that have initiated social recovery are also affected by this issue. Although the new rec.social will be much more secure with a non-zero threshold, the initial account update transaction and its account_update information are still visible on chain. Anyone with msk_ss_rik can recover the initial msk_ss_social by looking at the initial account update string, and with msk_ss_rik and the initial msk_ss_social, they can recover the account’s msk. This breaks the main invariant.

Q: This report is wrong because msk_ss_social is rotated on recovery initiation. So even if you decrypted the old msk_ss_social, it’s outdated and useless.

A: Nope, the main impact of this issue is msk recovery without guardians’ approval, and we can decrypt the msk from the old msk_ss_social and old enc_msk recorded in the initial account update. More specifically, during account initialization, AccountUpdate contains msg.state.rec.enc_msk, which is generated as Sym.Seal(kdfn(msk_ss_rik, msk_ss_social_old), msk) ref. Since msk_ss_rik does not change, we can decrypt the msk from enc_msk_old with msk_ss_rik and msk_ss_social_old.

Recommended mitigation steps

  • Don’t generate the initial social recovery with trivial parameters.
  • Make the social property optional and fill it only when recovery is initiated.

Swafe mitigated:

  • Set threshold=1 in gen() so initial recovery backup is unrecoverable (0 shares < threshold 1)
  • Move guardians.len() < threshold check from BackupCiphertextV0::new to public backup() API, allowing internal create_recovery() to produce intentionally unrecoverable backups

Status: Mitigation confirmed. Full details in reports from montecristo, niffylord, and DCENT09.


[M-07] Unable to upload guardian shares on social backup

Submitted by montecristo, also found by 0xvd, ChainSentry, count-sum, Egbe, eightzerofour, Goodman, JuggerNaut63, niffylord, RotiTelur, SanketKogekar, ScientificKatie420, ShredSecurity, Spektor, and touristS

contracts/src/http/endpoints/reconstruction/upload_share.rs #L48-L50

The upload-share endpoint fails to locate social backup ciphertexts because it only searches the AccountState.backups field while social backups are stored separately in AccountState.rec.social. This prevents guardians from uploading their shares, making social recovery impossible.

Finding description and impact

Social backup is a special recovery backup that stores the social recovery key ($\mathsf{msk\_ss\_social}$) and distributes its encryption to guardians.

The whole integration flow is demonstrated in SwafeContractTest::testNewRecoveryFlow, and is summarized as the following:

  1. Owner and guardian accounts are allocated
  2. Recovery setup

    1. Account update string will apply the following changes trace1, trace2, trace3

      1. AccountSecrets.recovery.social is set to $\mathsf{BackupCipherText.Gen}(\mathsf{msk = msk\_ss\_rik}, \mathsf{data = msk\_ss\_social})$ trace1, trace2, trace3
      2. $\mathsf{rik}$ is generated and pushed to AccountSecrets.recovery.assoc
  3. Account is updated on chain using the above account update string

    At this point, $\mathsf{BackupCipherText}$ is stored in AccountState.rec.social, as we can confirm in the below code:

    File: lib/src/account/v0.rs

    635:     pub fn update<R: Rng + CryptoRng>(&self, rng: &mut R) -> Result<AccountUpdate> {
    ...
    703:         let st = AccountStateV0 {
    704:             cnt,
    705:             backups: self.backups.clone(),
    706:             recover: self.recover.clone(),
    707:             pke: self.pke.encryption_key(),
    708:             sig: self.sig.verification_key(),
    709:             act,
    710:             rec: RecoveryStateV0 {
    711:                 pke: None,
    712:                 assoc,
    713:                 // TODO: unfortunately we cannot generate this anew every time
    714:@>               social: self.recovery.social.clone(),
    715:                 enc_msk,
    716:             },
    717:         };
  4. Recovery initiation

    1. This step generates an encryption key pair (pke, dke) and generates AccountUpdateRecovery request trace1, trace2, trace3
  5. Submits AccountUpdateRecovery request to the contract

    1. By this update, AccountState.rec.pke will be set trace1, trace2, trace3
  6. Generate guardian shares
  7. Verify guardian shares

The test ends here, but we still have to upload the guardian shares to the guardian nodes for distribution. Otherwise, the guardian shares are not available on the nodes when a recovery request is received.

If we take a look at upload-share handler:

File: contracts/src/http/endpoints/reconstruction/upload_share.rs

33: pub fn handler(
34:     mut ctx: OffChainContext,
35:     state: ContractState,
36:     request: HttpRequestData,
37:     _params: Params,
38: ) -> Result<HttpResponseData, ContractError> {
39:     let request: Request = deserialize_request_body(&request)?;
40: 
41:     let backup_id = request.backup_id.0;
42:     let account_id = request.account_id.0;
43: 
44:     let account = state
45:         .get_account(account_id)
46:         .ok_or_else(|| ServerError::NotFound("Account not found".to_string()))?;
47: 
48:@>   let backup: &BackupCiphertext = account.recover_id(backup_id).ok_or_else(|| {
49:         ServerError::NotFound(format!("Backup not found for backup_id: {}", backup_id))
50:     })?;

We need to provide account_id and backup_id to identify the $\mathsf{BackupCipherText}$ stored in step 3, and later match it to our guardian share.

However, since account.recover_id looks for backups only in AccountState.backups (trace1, trace2, trace3), the upload-share endpoint fails with a “Backup not found” error.

Impact

  • Unable to upload guardian shares generated for social backup
  • Guardians cannot recover $\mathsf{msk\_ss\_social}$ because they do not have corresponding guardian shares in their nodes

Recommended mitigation steps

Look up backups in both AccountState.backups and AccountState.rec.social.

Swafe mitigated:

  • Changed the backup lookup from self.backups.iter().collect() to self.recover.iter().chain(once(&self.rec.social)).collect()
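
The chained lookup in this mitigation can be sketched as below. `Account`, the tuple layout, and the `recover_id` signature are simplified stand-ins for the actual Swafe types; the point is that the iterator covers both the regular backups and the separately stored social backup.

```rust
use std::iter::once;

// Sketch of the mitigation: search both the regular backups and the
// social recovery backup in one chained iterator. Types simplified.
struct Account {
    recover: Vec<(u32, &'static str)>, // (backup_id, ciphertext stand-in)
    social: (u32, &'static str),       // the social backup lives separately
}

impl Account {
    fn recover_id(&self, id: u32) -> Option<&'static str> {
        self.recover
            .iter()
            .chain(once(&self.social)) // include AccountState.rec.social
            .find(|b| b.0 == id)
            .map(|b| b.1)
    }
}

fn main() {
    let acc = Account {
        recover: vec![(1, "regular")],
        social: (7, "social"),
    };
    assert_eq!(acc.recover_id(7), Some("social")); // found via the chain
    assert_eq!(acc.recover_id(1), Some("regular"));
    assert_eq!(acc.recover_id(9), None);
    println!("social backup reachable");
}
```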

Status: Mitigation confirmed. Full details in reports from montecristo, niffylord, and DCENT09.


Informational Issues

For this audit, 35 QA reports were submitted by wardens compiling low risk and informational issues. The QA report highlighted below by 1AutumnLeaf777 received the top score from the judge. 24 Low-severity findings were also submitted individually, and can be viewed here.

The following wardens also submitted QA reports: 0x_DyDx, 0xFBI, 0xki, 0xnija, 0xozovehe, 0xshdax, adecs, Agontuk, ameng, Angry_Mustache_Man, arunabha003, Auditor_Nate, AuditShield, bbl4de, cosin3, happykilling, jerry0422, K42, legat, lioblaze, Manvita, montecristo, NexusAudits, oade_hacks, redfox, Seeker, Sparrow, TheCarrot, Tigerfrake, valarislife, vangrim, vt729830, Wojack, and zubyoz.

[01] Off-by-one error in guardian share index validation causes panic

The verify function in BackupCiphertextV0 uses > instead of >= when validating the share index:

pub fn verify(&self, share: &GuardianShareV0) -> Result<u32, SwafeError> {
    if share.idx > self.comms.len() as u32 {
        return Err(SwafeError::InvalidShare);
    }
    self.comms[share.idx as usize].vk.verify(  // panics if idx == len
        &share.sig,
        &SignedEncryptedShare { ct: &share.ct, idx: share.idx },
    )?;
    Ok(share.idx)
}

If share.idx == self.comms.len(), the check passes but the array access panics with index out of bounds.

Impact

A malicious guardian share with idx equal to the length of the commitments array can crash the node processing the share. This could be used as a denial of service vector during backup recovery.

Change > to >=:

if share.idx >= self.comms.len() as u32 {
    return Err(SwafeError::InvalidShare);
}
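
The off-by-one can be demonstrated in isolation. This is a minimal stand-alone sketch (the `check_gt`/`check_ge` helpers are illustrative, not the Swafe code): with `>`, a share index equal to the commitments length passes validation and the subsequent array access would panic, while `>=` rejects it cleanly.

```rust
// Minimal demonstration of the bounds bug; types are simplified.
fn check_gt(idx: u32, len: usize) -> bool {
    idx > len as u32 // buggy: idx == len slips through
}

fn check_ge(idx: u32, len: usize) -> bool {
    idx >= len as u32 // fixed: idx == len is rejected
}

fn main() {
    let comms = vec![(), (), ()]; // stand-in for three commitments
    let idx = comms.len() as u32; // idx == len, one past the end
    assert!(!check_gt(idx, comms.len())); // buggy check does not flag it
    assert!(check_ge(idx, comms.len())); // fixed check flags it
    println!("idx {} rejected by >= check", idx);
}
```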

[02] AEAD seal function does not require CryptoRng

The seal function accepts any Rng instead of requiring CryptoRng:

pub(crate) fn seal<M: Tagged, A: Tagged, R: Rng>(
    rng: &mut R,
    key: &Key,
    pt: &M,
    ad: &A,
) -> AEADCiphertext {

Impact

If a caller passes a non-cryptographic RNG (e.g., rand::rngs::SmallRng), the nonce generation would be predictable, potentially compromising the encryption. While current callers use CryptoRng, the type signature does not enforce this requirement.

Change the bound to require CryptoRng:

pub(crate) fn seal<M: Tagged, A: Tagged, R: Rng + CryptoRng>(
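
The effect of the tightened bound can be sketched with local stand-in traits (the real code uses rand’s `Rng` and `CryptoRng`; everything below is illustrative): adding the `CryptoRng` marker bound turns passing a non-cryptographic generator into a compile-time error rather than a silent weakness in nonce generation.

```rust
// Stand-in traits for illustration; not the rand crate's definitions.
trait Rng {
    fn next_u64(&mut self) -> u64;
}
trait CryptoRng {} // marker: implementor promises cryptographic output

// seal now refuses any R that does not also implement CryptoRng
fn seal<R: Rng + CryptoRng>(rng: &mut R) -> u64 {
    rng.next_u64() // stand-in for nonce generation
}

struct FakeOsRng(u64); // pretend CSPRNG for the demo
impl Rng for FakeOsRng {
    fn next_u64(&mut self) -> u64 {
        self.0
    }
}
impl CryptoRng for FakeOsRng {}

fn main() {
    let mut rng = FakeOsRng(7);
    assert_eq!(seal(&mut rng), 7);
    // a type implementing only Rng would fail to compile at the call site
    println!("CryptoRng bound enforced");
}
```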

[03] Missing zeroize on VDRF secret key share

The VdrfSecretKeyShare struct holds a secret field element but does not implement Zeroize or ZeroizeOnDrop:

#[derive(Serialize, Deserialize, Clone)]
pub struct VdrfSecretKeyShare(#[serde(with = "...")] pp::Fr);

Compare to DecryptionKey which properly implements zeroization:

impl Drop for DecryptionKey {
    fn drop(&mut self) {
        self.sk.zeroize();
    }
}

Impact

Secret key material may persist in memory after the struct is dropped, increasing the window for memory disclosure attacks.

Add ZeroizeOnDrop derive or implement Drop with zeroization:

#[derive(Serialize, Deserialize, Clone, ZeroizeOnDrop)]
pub struct VdrfSecretKeyShare(...);
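
As a std-only sketch of the idea (a production fix should use the zeroize crate, whose ZeroizeOnDrop derive resists compiler optimizations more robustly; the names below are illustrative): overwrite the secret bytes with volatile writes when the value is dropped.

```rust
// Std-only sketch; a real fix should use the zeroize crate.
fn wipe(buf: &mut [u8]) {
    for b in buf.iter_mut() {
        // volatile write so the store is not optimized away
        unsafe { std::ptr::write_volatile(b, 0) };
    }
}

struct SecretShare {
    bytes: [u8; 32], // stand-in for the secret field element
}

impl Drop for SecretShare {
    fn drop(&mut self) {
        wipe(&mut self.bytes);
    }
}

fn main() {
    let mut probe = [0xAAu8; 32];
    wipe(&mut probe);
    assert!(probe.iter().all(|&b| b == 0));
    let share = SecretShare { bytes: [0x55; 32] };
    drop(share); // bytes are wiped before the memory is released
    println!("secret wiped on drop");
}
```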

[04] Schnorr signature hash order differs from specification

The specification documents the Schnorr signature challenge hash as:

c <- H(pk^(sign), delta, m)

Hash order: (pk, R, message)

The implementation uses a different order:

let e = pp::Fr::from_le_bytes_mod_order(&hash(&SchnorrHash {
    r: sig.r,
    pk: self.clone(),
    message: hash(msg),
}));

Hash order: (R, pk, message)

Impact

This represents a deviation from the documented specification. While both orderings are cryptographically secure, any external implementation following the specification would produce incompatible signatures.

Align the specification and implementation to use the same hash order.
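
Why the order matters can be shown with any hash (illustration only; `DefaultHasher` is not a cryptographic hash and the strings are placeholders): a hash binds the order of its inputs, so a challenge computed as H(pk, R, m) is incompatible with one computed as H(R, pk, m).

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash a sequence of parts in order; swapping parts changes the digest.
fn h(parts: &[&str]) -> u64 {
    let mut s = DefaultHasher::new();
    for p in parts {
        p.hash(&mut s);
    }
    s.finish()
}

fn main() {
    let (pk, r, m) = ("pk_sign", "R", "message");
    let spec_order = h(&[pk, r, m]); // specification: (pk, R, m)
    let impl_order = h(&[r, pk, m]); // implementation: (R, pk, m)
    assert_ne!(spec_order, impl_order); // incompatible challenges
    println!("orderings diverge");
}
```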

[05] PKE multi-recipient scheme differs significantly from documented design

The specification describes a multi-recipient encryption scheme using a single shared symmetric key with XOR-masked deltas:

PK.Enc((pk1,...,pkn), (msg1,...,msgn), ctx)
  sk* <-$ F
  pk* <- [sk*].G
  key <-$ {0,1}^256
  forall i. delta_i <- key XOR KDF([sk*].pk_i, pk_i || pk*)
  senc <- AEAD.Seal(key, msg, ad=(ctx, pk*, delta1,...,deltan), nonce=0)

The implementation uses a fundamentally different construction with separate ciphertexts per recipient and a signature for binding:

pub fn batch_encrypt(...) -> BatchCiphertext {
    let sk = sig::SigningKey::gen(rng);
    let vk = sk.verification_key();
    let cts = msgs.map(|(key, msg)| {
        key.encrypt(rng, &msg, &BatchCtx { vk: &vk, ctx: ... })
    }).collect();
    let sig = sk.sign(rng, &inn);
    BatchCiphertext::V0(BatchCiphertextV0 { inn, sig })
}

Key differences:

  • Documented: 1 shared key, 1 AEAD ciphertext, XOR-masked deltas
  • Implemented: N separate keys, N AEAD ciphertexts, signature binding

Impact

The implementation deviates significantly from the documented specification. The schemes have different security properties and ciphertext structures. Any external implementation following the specification would be incompatible.

Update the specification to accurately document the implemented scheme and its security assumptions (Gap-CDH in G1).

[06] Pedersen generators both derived via hash-to-curve instead of using standard generator

The specification describes:

  • G: Standard generator point
  • H: Result of hash_to_curve(b"generator:value")

The implementation derives both generators via hash-to-curve:

Self {
    h: pp::hash_to_g1(&PedersenGenSep { name: "H" }),
    g: pp::hash_to_g1(&PedersenGenSep { name: "G" }),
}

Impact

This deviates from the documented specification. While both approaches maintain the required security property (unknown discrete log relationship between G and H), the implementation does not match the documented design.

Update the specification to match the implementation.

[07] VDRF input is pre-hashed before use, deviating from spec

The specification describes:

K <- H(C0 || input)

The implementation pre-hashes the input before concatenation:

let pnt = pp::hash_to_g2(&VdrfKPoint {
    c0: public_key.c0,
    input: hash(input),  // pre-hashed
});

Impact

The implementation deviates from the documented specification. External implementations following the spec would compute different VDRF outputs for the same inputs.

Update the specification to reflect the pre-hashing step.

[08] Recovery KDF includes additional domain separation not documented in spec

The specification describes:

msk_dec_key = KDF(msk_ss_rik || msk_ss_social, epsilon)

Where epsilon is an empty info parameter.

The implementation includes the account ID and a domain separator:

impl Tagged for MskRecoveryInfo<'_> {
    const SEPARATOR: &'static str = "v0:msk-recovery-kdf";
}

hash::kdfn(
    &MskRecoveryShares { msk_ss_rik, msk_ss_social },
    &MskRecoveryInfo { acc },  // includes account_id
)

Impact

The implementation deviates from the documented specification by including additional context in the KDF. External implementations following the spec would derive different keys, causing decryption failures.

Update the specification to document the additional domain separation and account ID binding.

[09] Email certificate rejects future timestamps instead of allowing symmetric ±5 minute window

The specification in association.html defines a symmetric timestamp validation window:

Check time >= current time - 5 min
Check time <= current time + 5 min

The implementation rejects any timestamp in the future:

// lib/src/crypto/email_cert.rs:101-113

// Check if certificate is from the future
if ts > now {
    return Err(SwafeError::CertificateFromFuture);
}

// Check if certificate is expired
if now
    .duration_since(ts)
    .map_err(|_| SwafeError::CertificateExpired)?
    > VALIDITY_PERIOD
{
    return Err(SwafeError::CertificateExpired);
}

Impact

When the Swafe server’s clock is ahead of an off-chain node’s clock (common in distributed systems), certificates are rejected with CertificateFromFuture. Users must wait for the node’s clock to catch up before the certificate becomes valid. While users can retry after waiting, this deviates from the documented specification and creates unnecessary friction during email association and recovery flows. The error message may also confuse users into thinking the certificate is permanently invalid rather than temporarily unusable.

Implement the symmetric window as specified:

// Allow timestamps up to 5 minutes in the future
if let Ok(future_diff) = ts.duration_since(now) {
    if future_diff > VALIDITY_PERIOD {
        return Err(SwafeError::CertificateFromFuture);
    }
}

// Check if certificate is expired (too far in the past)
if let Ok(past_diff) = now.duration_since(ts) {
    if past_diff > VALIDITY_PERIOD {
        return Err(SwafeError::CertificateExpired);
    }
}

Detailed Proofs of Concept for the above-listed issues may be viewed here


Mitigation Review

Introduction

Following the C4 audit, 3 wardens (montecristo, niffylord, DCENT09) reviewed the mitigations of 7 Medium, 14 Lows, and 2 QA items in the audit report. Additional details can be found within the Swafe Mitigation Review repositories:

Mitigation Review Scope & Summary

During the mitigation review, the wardens confirmed that all in-scope findings were mitigated.

The table below provides details regarding the status of each in-scope vulnerability from the original audit.

Original Issue Status Mitigation URL
M-01 🟢 Mitigation Confirmed swafe-lib PR 151
M-02 🟢 Mitigation Confirmed swafe-lib PR 147
M-03 🟢 Mitigation Confirmed swafe-lib PR 153
M-04 🟢 Mitigation Confirmed swafe-lib PR 154
M-05 🟢 Mitigation Confirmed swafe-lib PR 152
M-06 🟢 Mitigation Confirmed swafe-lib PR 156
M-07 🟢 Mitigation Confirmed swafe-lib PR 147
S-12 (Low) 🟢 Mitigation Confirmed swafe-lib PR 151
S-210 (Low) 🟢 Mitigation Confirmed swafe-lib PR 151
S-256 (Low) 🟢 Mitigation Confirmed swafe-lib PR 155
S-867 (Low) 🟢 Mitigation Confirmed swafe-lib PR 157
S-1145 (Low) 🟢 Mitigation Confirmed swafe-lib PR 157
S-1089 (Low) 🟢 Mitigation Confirmed swafe-lib PR 158
S-207 (Low) 🟢 Mitigation Confirmed swafe-lib PR 207
S-127 (Low) 🟢 Mitigation Confirmed swafe-lib PR 159
S-508 (Low) 🟢 Mitigation Confirmed swafe-lib PR 159
S-401 (Low) 🟢 Mitigation Confirmed swafe-lib PR 159
S-475 (Low) 🟢 Mitigation Confirmed swafe-lib PR 160
S-1105 (Low) 🟢 Mitigation Confirmed swafe-lib PR 161
S-215 (Low) 🟢 Mitigation Confirmed swafe-lib PR 161
S-1236: QA-10/QA-11 (QA) 🟢 Mitigation Confirmed swafe-lib PR 161
S-1163 (Low) 🟢 Mitigation Confirmed swafe-lib PR 163

Disclosures

C4 audits incentivize the discovery of exploits, vulnerabilities, and bugs in smart contracts. Security researchers are rewarded at an increasing rate for finding higher-risk issues. Audit submissions are judged by a knowledgeable security researcher and disclosed to sponsoring developers. C4 does not conduct formal verification regarding the provided code but instead provides final verification.

C4 does not provide any guarantee or warranty regarding the security of this project. All smart contract software should be used at the sole risk and responsibility of users.