Unruggable Invitational
Findings & Analysis Report
2025-02-24
Table of contents
- Summary
- Scope
- Severity Criteria
- Low Risk and Non-Critical Issues
  - [01] Underflow panic before intended custom error
  - [02] Return argument count mismatch
  - [03] In some cases, a prover might skip account state verification and artificially return 0 storage value, due to vulnerable getStorage implementation
  - [04] GatewayFetchTarget::fetch/fetchCallback not in full compliance with EIP-3668
  - [05] Vulnerable left padding when decoding storage slot values returned from SecureMerkleTrie.get
  - [06] Risk of infinite loop causing gas exhaustion; consider adding checks on max depth of recursive calls
  - [07] Missing check on compressed flags, allowing proof with invalid flag to pass
  - [08] OPFaultVerifier’s game time logic allows proof generation from future states
  - [09] Some invalid proof will not be reverted due to vulnerable control flow in ScrollVerifierHooks
- Mitigation Review
  - Introduction
  - Mitigation Review Summary
  - Mitigation of [M-01]
  - Mitigation of [M-02]
  - Mitigation of additional finding [F-10]
  - Additional Review Summary
  - [M-01] OPVerifier incorrectly verifies state with unfinalized nodes when minAgeSec == 0
  - [L-01] MerkleTrie should reject un-used proofs
  - [L-02] ScrollVerifierHooks should ensure there are no un-used proofs
- Disclosures
Overview
About C4
Code4rena (C4) is an open organization consisting of security researchers, auditors, developers, and individuals with domain expertise in smart contracts.
A C4 audit is an event in which community participants, referred to as Wardens, review, audit, or analyze smart contract logic in exchange for a bounty provided by sponsoring projects.
During the audit outlined in this document, C4 conducted an analysis of a scoped subset of the verifier smart contracts from the Unruggable Gateways codebase. The audit took place from December 06 to December 20, 2024.
This audit was judged by Picodes.
Final report assembled by Code4rena.
Following the C4 audit, warden peakbolt reviewed the mitigations for sponsor-addressed issues from the original audit and also completed an additional full review of the codebase. The mitigation review report and additional review are appended to this audit report.
Summary
The C4 analysis yielded an aggregated total of 2 unique findings. Of these findings, 0 received a risk rating in the category of HIGH severity and 2 received a risk rating in the category of MEDIUM severity.
Additionally, C4 analysis included 4 reports detailing issues with a risk rating of LOW severity or non-critical.
All of the issues presented here are linked back to their original finding.
Scope
The code under review can be found within the C4 Unruggable repository, and is composed of 17 smart contracts written in the Solidity programming language and includes 1433 lines of Solidity code.
Severity Criteria
C4 assesses the severity of disclosed findings based on three primary risk categories: high, medium, and low/non-critical.
High-level considerations for findings span the following key areas when conducting assessments:
- Malicious Input Handling
- Escalation of privileges
- Arithmetic
- Gas use
For more information regarding the severity criteria referenced throughout the submission review process, please refer to the documentation provided on the C4 website, specifically our section on Severity Categorization.
Medium Risk Findings (2)
[M-01] zkTrie maximum depth limit is not enforced in Scroll
Submitted by Evo
Impact
The walkTree method in ScrollVerifierHooks.sol does not implement the maximum depth limit of 248 that is specified in Scroll’s zkTrie documentation. This causes the Scroll verifier to work differently than intended, potentially accepting proofs that should be invalid according to the protocol specification.
Proof of Concept
From Scroll’s official documentation, zkTrie is explicitly specified to have these constraints:
“We limit the maximum depth of zkTrie to 248, meaning that the tree will only traverse the lower 248 bits of the key. Because the secure key space is a finite field used by Poseidon hash that doesn’t occupy the full range of 2^256, the bit representation of the key can be ambiguous in the finite field and thus results in a soundness issue in the zk circuit.”
Looking at the Scroll contracts’ implementation of walkTree, the corresponding check is present there.
However, reviewing the walkTree implementation in ScrollVerifierHooks.sol:
function walkTree(
bytes32 key,
bytes memory encodedProof,
bytes32 rootHash,
uint256 leafSize
) internal view returns (bytes32 keyHash, bytes32 h, bytes memory v, bool exists) {
bytes[] memory proof = abi.decode(encodedProof, (bytes[]));
keyHash = poseidonHash1(key);
h = rootHash;
for (uint256 i; ; i++) {
// Missing depth limit check: if (i >= 248) revert InvalidProof();
if (i == proof.length) revert InvalidProof();
v = proof[i];
if (v.length == 0) revert InvalidProof();
uint256 nodeType = uint8(v[0]);
if (nodeType == NODE_LEAF) {
bytes32 temp;
assembly {
temp := mload(add(v, 33))
}
if (temp == keyHash) {
assembly {
temp := mload(add(v, leafSize))
}
if (temp != key) revert InvalidProof();
exists = true;
} else {
bytes32 p = bytes32((1 << i) - 1); // prefix mask
if ((temp & p) != (keyHash & p)) revert InvalidProof();
keyHash = temp;
}
break;
}
// ... rest of the code
}
}
The implementation clearly does not apply any maximum depth limit check in the main loop, allowing proofs deeper than 248 levels.
Recommended Mitigation Steps
Add depth limit check as specified in documentation:
// In main loop
if (i >= 248) revert InvalidProof();
Alternatively, apply the same check used in the Scroll contract implementation.
I agree with this. I was not aware of this limit.
thomasclowes (Unruggable) confirmed
Unruggable mitigated:
Added Scroll Max Trie Depth Guard here.
Status: Mitigation confirmed. Issue is resolved with the added MAX_TRIE_DEPTH (248)
check that ensures provided proof does not exceed the max depth of 248.
[M-02] OPFaultVerifier ingests games that resolve incorrectly
Submitted by Bauchibred
Take a look at the OPFaultVerifier implementation:
interface IOptimismPortal {
function disputeGameFactory() external view returns (IDisputeGameFactory);
function respectedGameType() external view returns (GameType);
// we don't care if the root was blacklisted, this only applies to withdrawals
//function disputeGameBlacklist(IDisputeGame game) external view returns (bool);
}
..snip
function getStorageValues(
bytes memory context,
GatewayRequest memory req,
bytes memory proof
) external view returns (bytes[] memory, uint8 exitCode) {
uint256 gameIndex1 = abi.decode(context, (uint256));
GatewayProof memory p = abi.decode(proof, (GatewayProof));
(, , IDisputeGame gameProxy, uint256 blockNumber) = _gameFinder.gameAtIndex(
_portal,
_minAgeSec,
_gameTypeBitMask,
p.gameIndex
);
require(blockNumber != 0, "OPFault: invalid game");
// @audit No blacklist check here
// ...
return GatewayVM.evalRequest(req, ProofSequence(0, p.outputRootProof.stateRoot, p.proofs, p.order, _hooks));
}
From the documentation, the Optimism fault proof verifier assumes that blacklisted games only matter for withdrawals. While blocking withdrawals is one of the reasons the blacklist logic is implemented, it is not the only one.
To put this in perspective, the Optimism portal allows the GUARDIAN to permanently blacklist a game when the game resolves incorrectly, which shows that such a game cannot be trusted for use in our verification logic.
This can be seen from the Optimism Portal itself.
See the implementation of the portal from the previous code4rena audit:
First, the blacklist is implemented as a simple mapping in the OptimismPortal2 contract:
/// @notice A mapping of dispute game addresses to whether or not they are blacklisted.
mapping(IDisputeGame => bool) public disputeGameBlacklist;
Only the guardian can blacklist dispute games and this is done only in the case where the game is resolving incorrectly:
/// @notice Blacklists a dispute game. Should only be used in the event that a dispute game resolves incorrectly.
/// @param _disputeGame Dispute game to blacklist.
function blacklistDisputeGame(IDisputeGame _disputeGame) external {
if (msg.sender != guardian()) revert Unauthorized();
disputeGameBlacklist[_disputeGame] = true;
emit DisputeGameBlacklisted(_disputeGame);
}
This brings us to the fact that if a game is blacklisted, it is an invalid premise for verification, and we should check for this in the Verifier.
Impact
High. While the primary and most visible impact of blacklisting a game is blocking withdrawals, the blacklisting mechanism actually invalidates a dispute game entirely because it resolved incorrectly. This means we should not use such a game in our OP Verifiers; otherwise, we would be accepting an incorrectly resolved game.
Recommended Mitigation Steps
Ensure the game being verified is not blacklisted.
thomasclowes (Unruggable) acknowledged and commented:
The Optimism documentation is lacking, and this rollup tech is nascent in nature. From our discussions with the OP team the blacklisting is purely utilised for withdrawals. This is corroborated within the smart contracts.
The overhead of checking against the blacklist is significant, and it is unclear how to verify against it in this context noting that the system wasn’t built with this use case in mind.
We acknowledge this, but need to converse with the OP team directly as they iterate on their tech moving forward to be able to utilize this in any meaningful way. It was an intentional omission.
Downgrading to Low for “unclear documentation” following this remark.
Bauchibred (warden) commented:
@Picodes, thanks for judging, but I can’t seem to understand the POV of the sponsors and take it at face value.
I don’t see how we should be allowing a blacklisted game in Unruggable’s OPVerifier, considering blacklisting a game on Optimism is a completely irreversible process. This is only done once the game is concluded to have been resolved incorrectly by the Guardian.
Does this block withdrawals? 100%, but a game is never permanently blacklisted if it didn’t resolve incorrectly, and once done, it cannot be undone. In my opinion, it seems counterintuitive for sponsors to rely on the fact that it’s purely for withdrawals and have it here in the OPVerifier.
@bauchibred - indeed digging into this this isn’t about Unruggable’s documentation, but about Optimism. However, I’m seeing the conditions required for this to happen. I believe Medium severity is more appropriate (“Assets not at direct risk, but the function of the protocol or its availability could be impacted, or leak value with a hypothetical attack path with stated assumptions, but external requirements.“)
thomasclowes (Unruggable) commented:
The argument here is that the blacklist is not implemented as a method of verifying data integrity when reading data from OP stack chains. It is purely implemented in the context of asset withdrawals as outlined by Optimism. That is to say, Optimism would not blacklist a game to invalidate data read integrity.
@thomasclowes - I get the argument; but to me, the argument is not if they “would not” blacklist this game, but if they “can” do it or not, isn’t it? As they can, your reasoning stands but this is a valid Medium following C4 rules, that you’re free to acknowledge.
thomasclowes (Unruggable) commented:
They ‘can’ blacklist the game, but it is a mechanism that is not used in the context of data verification (this use case).
@thomasclowes - is it explicitly outlined somewhere or is this from your conversation with the team?
thomasclowes (Unruggable) commented:
Conversations with the Optimism team, on their developers Discord. They stated that our use case for verifying state integrity is not what the blacklist is designed for and linked to the following documentation.
“Blacklisting a dispute game means that no withdrawals proven against it will be allowed to finalize (per the “Dispute Game Blacklist” invariant), and they must re-prove against a new dispute game that resolves correctly.”
Unruggable mitigated:
Mitigation here.
Status: Mitigation confirmed. Resolved by adding finalized mode validation to check that the selected game is finalized.
Low Risk and Non-Critical Issues
For this audit, 4 reports were submitted by wardens detailing low risk and non-critical issues. The report highlighted below by oakcobalt received the top score from the judge.
The following wardens also submitted reports: Bauchibred, rbserver, and ZanyBonzy.
[01] Underflow panic before intended custom error
GatewayVM::checkBack checks whether the stack will underflow before a rewind look-up. When checkBack would underflow, the custom error InvalidStackIndex is intended to propagate.
The problem is that the arithmetic underflow panic happens before the custom error revert. This means the custom error will never propagate and the function will always panic first.
//contracts/GatewayVM.sol
function checkBack(
Machine memory vm,
uint256 back
) internal pure returns (uint256) {
if (back >= vm.stackSize) {
|> uint256 index = vm.stackSize - 1 - back;
revert InvalidStackIndex(index);
}
unchecked {
return vm.stackSize - 1 - back;
}
}
Recommendation
Change index to int256 and the error to InvalidStackIndex(int256 index).
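Below is a minimal sketch of one possible fix along these lines (an illustration under assumptions, not the project’s actual patch); the Machine struct is reduced to a plain stackSize parameter to keep the example self-contained:

// Sketch: compute the out-of-range index as a signed value inside unchecked
// arithmetic so the custom error is reached instead of an arithmetic panic.
error InvalidStackIndex(int256 index);

function checkBack(uint256 stackSize, uint256 back) pure returns (uint256) {
    if (back >= stackSize) {
        int256 index;
        unchecked {
            // would underflow as uint256; as int256 it is simply negative
            index = int256(stackSize) - 1 - int256(back);
        }
        revert InvalidStackIndex(index);
    }
    unchecked {
        return stackSize - 1 - back;
    }
}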
[02] Return argument count mismatch
OPFaultGameFinder::gameAtIndex returns 5 values, but IOPFaultGameFinder::gameAtIndex only declares 4. The number of return variables mismatches between the interface used and the actual contract. Since the used return variables still match the order of the contract function’s returns, runtime behavior is unaffected.
//contracts/op/OPFaultVerifier.sol
function getStorageValues(
bytes memory context,
GatewayRequest memory req,
bytes memory proof
) external view returns (bytes[] memory, uint8 exitCode) {
...
//@audit 4 return variables
|> (, , IDisputeGame gameProxy, uint256 blockNumber) = _gameFinder.gameAtIndex(
_portal,
_minAgeSec,
_gameTypeBitMask,
p.gameIndex
);
...
In OPFaultGameFinder::gameAtIndex, 5 variables are returned.
//contracts/op/OPFaultGameFinder.sol
function gameAtIndex(
IOptimismPortal portal,
uint256 minAgeSec,
uint256 gameTypeBitMask,
uint256 gameIndex
)
external
view
returns (uint256 gameType, uint256 created, IDisputeGame gameProxy, uint256 l2BlockNumber, bytes32 rootClaim)
{
...
Recommendation
Revise IOPFaultGameFinder::gameAtIndex to return the same arguments as OPFaultGameFinder, and revise OPFaultVerifier::getStorageValues accordingly (see the sketch below).
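Once the interface declares all five return values, the call site in OPFaultVerifier::getStorageValues could look roughly as follows (a sketch based on the excerpts above, not the project’s actual patch):

// Skip the unused gameType, created and rootClaim values at the call site.
(, , IDisputeGame gameProxy, uint256 blockNumber, ) = _gameFinder.gameAtIndex(
    _portal,
    _minAgeSec,
    _gameTypeBitMask,
    p.gameIndex
);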
[03] In some cases, a prover might skip account state verification and artificially return 0 storage value, due to vulnerable getStorage implementation
When an account doesn’t exist, querying its storage slot should return a bytes32 0 value. The problem is that the account is not validated to be non-existent in GatewayVM::getStorage.
There are two issues here:
- GatewayVM::getStorage doesn’t check whether the account has been verified as non-existent in the state trie.
- vm.storageRoot is always initialized as NOT_A_CONTRACT (bytes32 0), so it’s not possible to distinguish an unvalidated storage root from a validated EOA or non-existent storage root.
//contracts/GatewayVM.sol
function getStorage(Machine memory vm, uint256 slot) internal view returns (uint256) {
bytes memory proof = vm.readProof();
|> if (vm.storageRoot == NOT_A_CONTRACT) return 0;
return uint256(vm.proofs.hooks.verifyStorageValue(vm.storageRoot, vm.target, slot, proof));
}
If the gateway request (ses.req) passed to fetchCallback() is allowed to skip the set-target instruction, the prover can prove a storage slot’s value is bytes32(0) even though the storage slot for the given target account holds non-zero values.
//contracts/GatewayFetchTarget.sol
function fetchCallback(bytes calldata response, bytes calldata carry) external view {
//@audit if the passed carry is not sufficiently checked in a client contract that derives GatewayFetchTarget, ses.req can be modified to skip set target and artificially prove a storage slot is bytes32(0)
|> Session memory ses = abi.decode(carry, (Session));
(bytes[] memory values, uint8 exitCode) = ses.verifier.getStorageValues(ses.context, ses.req, response);
...
Suppose this case: a client contract that derives GatewayFetchTarget has an insufficient check that the passed carry is valid. The prover passes in a modified ses.req which directly calls READ_SLOT without SET_TARGET. GatewayVM::getStorage is invoked and, because the account (target) is never validated, vm.storageRoot is 0 (the same as NOT_A_CONTRACT), so GatewayVM assumes the target has been validated as not a contract (either an EOA or non-existent) and returns bytes32(0) as the storage slot value.
Recommendation
Consider changing the NOT_A_CONTRACT constant value to a special hash other than 0x0, as sketched below.
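A minimal sketch of such a sentinel (the specific value below is hypothetical; any constant that cannot collide with a default-initialized bytes32(0) works):

// Sketch: a non-zero sentinel, so an unverified vm.storageRoot (still bytes32(0))
// can no longer be mistaken for a validated EOA / non-existent account.
bytes32 constant NOT_A_CONTRACT = keccak256("GatewayVM.NOT_A_CONTRACT");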
[04] GatewayFetchTarget::fetch/fetchCallback not in full compliance with EIP-3668
Based on EIP-3668, both the CCIP-Read invocation function and its callback function MUST have the same return type.
The contract MUST also implement a callback method for decoding and validating the data returned by the gateway. The name of this method is implementation-specific, but it MUST have the signature (bytes response, bytes extraData), and MUST have the same return type as the function that reverted with OffchainLookup.
In GatewayFetchTarget, the callback function (GatewayFetchTarget::fetchCallback) does not declare an explicit return type. It is understandable that GatewayFetchTarget is agnostic to the data fetched cross-chain and its type, but in such cases, to be in full compliance with EIP-3668, fetchCallback should explicitly return bytes memory, which also allows further type interpretation downstream.
//contracts/GatewayFetchTarget.sol
function fetchCallback(
bytes calldata response,
bytes calldata carry
) external view {
Session memory ses = abi.decode(carry, (Session));
(bytes[] memory values, uint8 exitCode) = ses.verifier.getStorageValues(
ses.context, //@audit-info note: is this context checked?
ses.req,
response
);
(bool ok, bytes memory ret) = address(this).staticcall(
abi.encodeWithSelector(ses.callback, values, exitCode, ses.carry)
);
if (ok) {
assembly {
return(add(ret, 32), mload(ret))
}
} else {
assembly {
revert(add(ret, 32), mload(ret))
}
}
}
Recommendation
Consider revising GatewayFetchTarget::fetchCallback to explicitly return the bytes memory type, as sketched below.
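A sketch of the suggested signature change (assumed shape; the body is unchanged from the excerpt above, since the low-level assembly return already forwards the raw return data of the user-defined callback, which per EIP-3668 should itself return the same type):

//contracts/GatewayFetchTarget.sol (sketch)
function fetchCallback(
    bytes calldata response,
    bytes calldata carry
) external view returns (bytes memory) {
    Session memory ses = abi.decode(carry, (Session));
    (bytes[] memory values, uint8 exitCode) = ses.verifier.getStorageValues(
        ses.context,
        ses.req,
        response
    );
    (bool ok, bytes memory ret) = address(this).staticcall(
        abi.encodeWithSelector(ses.callback, values, exitCode, ses.carry)
    );
    if (ok) {
        assembly {
            // forwards the callback's raw return data as the declared bytes return
            return(add(ret, 32), mload(ret))
        }
    } else {
        assembly {
            revert(add(ret, 32), mload(ret))
        }
    }
}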
[05] Vulnerable left padding when decoding storage slot values returned from SecureMerkleTrie.get
When getting values from a storage slot by slot index (GatewayVM::getStorage), EthVerifierHooks::verifyStorageValue assumes the value of the slot could be less than 32 bytes, and will left-pad it with 0s in that case.
//contracts/eth/EthVerifierHooks.sol
function verifyStorageValue(
bytes32 storageRoot,
address /* target */,
uint256 slot,
bytes memory proof
) external pure returns (bytes32) {
(bool exists, bytes memory v) = SecureMerkleTrie.get(
abi.encodePacked(slot),
abi.decode(proof, (bytes[])),
storageRoot
);
|> return exists ? RLPReaderExt.bytes32FromRLP(RLPReader.readBytes(v)) : bytes32(0);
}
In bytes32FromRLP, we can confirm the left-padding and right-alignment when the bytes length is less than 32.
//contracts/RLPReaderExt.sol
function bytes32FromRLP(bytes memory v) internal pure returns (bytes32) {
if (v.length > 32) revert ContentLengthMismatch();
return bytes32(v) >> ((32 - v.length) << 3);
}
This is vulnerable because (1) the raw storage value in the storage trie should be 32 bytes long even when empty; and (2) if the proof’s decoded value of a storage slot is ever less than 32 bytes, this is likely due to an error or an exploit with an invalid proof, and the tx should be reverted instead of left-padded.
Recommendation
Consider using RLPReaderExt.strictBytes32FromRLP.
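For reference, a strict decoder along the lines the recommendation assumes would look roughly like this (a sketch; the actual RLPReaderExt.strictBytes32FromRLP in the codebase may differ in its internals):

error ContentLengthMismatch();

// Sketch: revert unless the decoded RLP item is exactly 32 bytes,
// instead of silently left-padding shorter values.
function strictBytes32FromRLP(bytes memory v) pure returns (bytes32) {
    if (v.length != 32) revert ContentLengthMismatch();
    return bytes32(v);
}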
[06] Risk of infinite loop causing gas exhaustion; consider adding checks on max depth of recursive calls.
GatewayVM’s EVAL opcode might recursively run vm.evalCommand, depending on the program. Currently, there is no check on the maximum depth of the vm.evalCommand call, resulting in a risk of an infinite loop causing gas exhaustion.
//contracts/GatewayVM.sol
function evalCommand(Machine memory vm, bytes[] memory outputs) internal view returns (uint8 /*exitCode*/) {
...
} else if (op == GatewayOP.EVAL) {
bool cond = !vm.isStackZeros(vm.pop());
bytes memory program = vm.popAsBytes();
if (cond) {
(uint256 pos, bytes memory buf) = (vm.pos, vm.buf); // save program
vm.buf = program;
vm.pos = 0;
|> uint8 exitCode = vm.evalCommand(outputs);
if (exitCode != 0) return exitCode;
(vm.pos, vm.buf) = (pos, buf); // restore program
}
...
Recommendation
Add a check on the recursive call depth, set a maximum recursive call depth, and revert when it is exceeded (a sketch follows).
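A minimal sketch of such a guard (an assumption for illustration: a depth counter would be threaded through evalCommand; MAX_EVAL_DEPTH and the helper below are hypothetical names, not existing code):

uint256 constant MAX_EVAL_DEPTH = 64; // hypothetical cap on nested EVALs

error MaxEvalDepthExceeded();

// Sketch: called on entry to each nested EVAL; the caller stores the returned
// value in the vm and decrements it after the nested program returns.
function enterEval(uint256 evalDepth) pure returns (uint256) {
    if (evalDepth >= MAX_EVAL_DEPTH) revert MaxEvalDepthExceeded();
    return evalDepth + 1;
}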
[07] Missing check on compressed flags, allowing proof with invalid flag to pass
Note: At the judge’s request, this downgraded issue from the same warden has been included in this report for completeness.
Finding description and impact
In Scroll’s zkTrie, account leaf nodes and storage leaf nodes each carry a compressed flag value which denotes whether a field cannot be represented as a field element. If it cannot, the field will be split into high-128/low-128 bits before Poseidon-hashing the value.
The issue is that in ScrollVerifierHooks::walkTree/verifyAccountState/verifyStorageValue, the compressed flag value is part of the proof provided by the prover but is never checked.
In addition, the compressed flag value is also never hashed as part of the leaf hash. As a result, a prover can supply any flag value and still pass the check.
Proof of Concept
In Scroll’s zktrie, a compression flag is part of the required proof and is 4 bytes long. For example, in an account trie, a valid proof of leaf node contains:
NodeType (1 byte) + NodeKey (32 bytes) + CompressedFlag (4 bytes) + nonce||codesize||0 (32 bytes) + balance (32 bytes) + storageRoot (32 bytes) + keccakCodeHash (32 bytes) + PoseidonCodeHash (32 bytes)
See the checks in the Scroll chain’s implementation; the official Scroll zkTrie verifier explicitly checks the compressedFlag value.
function verifyAccountProof(hasher, _account, _ptr) -> ptr, storageRootHash, _stateRoot {
...
require(eq(shr(224, calldataload(ptr)), 0x05080000), "InvalidAccountCompressedFlag")
However, in ScrollVerifierHooks.sol, the check for the compressed flag is missing. Currently, nodeType and nodeKey (keyHash) are checked in ScrollVerifierHooks::walkTree, and the fields from nonce to poseidonCodeHash (160 bytes total) are checked against leafHash in ScrollVerifierHooks::verifyAccountState.
//contracts/scroll/ScrollVerifierHooks.sol
function walkTree(
bytes32 key,
bytes memory encodedProof,
bytes32 rootHash,
uint256 leafSize
) internal view returns (bytes32 keyHash, bytes32 h, bytes memory v, bool exists) {
...
} else if (nodeType == NODE_LEAF) {
if (v.length != leafSize) revert InvalidProof();
// NOTE: leafSize is >= 33
if (uint8(v[leafSize - 33]) != 32) revert InvalidProof(); // InvalidKeyPreimageLength
bytes32 temp;
assembly {
temp := mload(add(v, 33))
}
//@audit check for nodeType and nodeKey(keyhash), no check on compressed flag (starting offset v + 65)
|> if (temp == keyHash) {
assembly {
temp := mload(add(v, leafSize))
}
if (temp != key) revert InvalidProof(); // InvalidKeyPreimage
exists = true;
...
//contracts/scroll/ScrollVerifierHooks.sol
function verifyAccountState(
bytes32 stateRoot,
address account,
bytes memory encodedProof
) external view returns (bytes32 storageRoot) {
(bytes32 keyHash, bytes32 leafHash, bytes memory leaf, bool exists) = walkTree(
bytes20(account),
encodedProof,
stateRoot,
230
); // flags = 0x05080000
if (leafHash == 0) return NOT_A_CONTRACT;
bytes32 temp;
bytes32 amount;
bytes32 codeHash;
assembly {
//@audit the hash starts after compressed flag (starting at v + 65) so compressed flag is not checked against leafHash either.
|> temp := mload(add(leaf, 69)) // nonce||codesize||0
amount := mload(add(leaf, 101))
storageRoot := mload(add(leaf, 133))
codeHash := mload(add(leaf, 165))
}
bytes32 h = poseidonHash2(storageRoot, poseidonHash1(codeHash), 1280);
h = poseidonHash2(poseidonHash2(temp, amount, 1280), h, 1280);
assembly {
temp := mload(add(leaf, 197))
}
h = poseidonHash2(h, temp, 1280);
h = poseidonHash2(keyHash, h, 4);
if (leafHash != h) revert InvalidProof(); // InvalidAccountLeafNodeHash
if (codeHash == NULL_CODE_HASH || !exists) storageRoot = NOT_A_CONTRACT;
}
Impact
- Proofs with invalid compressed flag will be accepted. This violates the invariant: Only correct proofs are accepted. Invalid or unproven inputs are rejected.
- A prover can pass proofs with variations of flag values under the current validation model. Such proofs would be rejected by Scroll’s official validation, creating potential for cross-system verification failures.
- The ability to arbitrarily set compressed flags represents a form of controlled modification of proofs. Although this doesn’t currently return manipulated values in GatewayFetchTarget.sol, it might allow a complex attack path in a client contract with custom business logic that derives GatewayFetchTarget.
Note that the compressed flag can be set arbitrarily in the ScrollVerifier::verifyStorageValue -> walkTree flow in a similar way.
Recommended mitigation steps
Check the flag to be the exact expected value (0x05080000 or 0x01010000), as sketched below.
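A minimal sketch of such a check (assumed offsets following the leaf-node layout described above: the 4-byte flag sits right after the 1-byte node type and the 32-byte node key, so the word loaded at v + 65 starts with it, as the audit notes in the excerpts also point out):

error InvalidCompressedFlag();

// Sketch: load the word that begins at the compressed flag and compare its top
// 4 bytes against the expected constant (0x05080000 for account leaves,
// 0x01010000 for storage leaves). Assumes v.length was already checked against
// leafSize, as walkTree does.
function checkCompressedFlag(bytes memory v, uint32 expected) pure {
    bytes32 word;
    assembly {
        word := mload(add(v, 65))
    }
    if (uint32(bytes4(word)) != expected) revert InvalidCompressedFlag();
}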
Links to affected code
- ScrollVerifierHooks.sol#L120
- ScrollVerifierHooks.sol#L127
raffy (Unruggable) disputed and commented:
Similar to magic bytes, the compressed flags have no influence on the hashes utilized by the verifier.
Picodes (judge) decreased severity to Low and commented:
Ignoring the compressed flag is clearly mentioned here. The impact is the same as for magic bytes (basically multiple proofs are valid) so Low severity seems appropriate.
[08] OPFaultVerifier’s game time logic allows proof generation from future states
Note: At the judge’s request, this downgraded issue from the same warden has been included in this report for completeness.
Finding description and impact
Due to permissionless game creation in the OP fault proof system, multiple games can be created at the same L1 timestamp asserting different L2 block number states. The only checks in the game creation flow are that the L2 block is later than the latest anchor state (last confirmed) and that there is no duplicated state creation.
OPFaultVerifier::_getGameTime is vulnerable: it doesn’t ensure a proper relationship with the L2 block sequence, and it allows proofs for a future state relative to the fetched context to be accepted and returned.
Proof of Concept
When _minAgeSec is set to 0, the context only fetches resolved games, and since there is only one canonical chain of resolved games, this is not vulnerable.
Vulnerable case: when _minAgeSec != 0, there could be multiple branches of ongoing games. OPFaultVerifier::_getGameTime is based on game creation time, which neither correlates with the L2 block number of the state assertion nor indicates the likelihood of correctness.
//contracts/op/OPFaultVerifier.sol
function _getGameTime(IDisputeGame g) internal view returns (uint256) {
|> return Timestamp.unwrap(_minAgeSec == 0 ? g.resolvedAt() : g.createdAt());
}
Flows:
OpFaultVerifier::getStorageValues -> _checkWindow -> _getGameTime
In _checkWindow, the proof’s game creation time (_getGameTime(gameProxy)) is checked against the context’s game creation time (_getGameTime(gameProxy1)). Because game creation time doesn’t reflect the L2 block sequence, this check is invalid: a game created at an earlier timestamp might represent a later blockchain state (L2 block number).
//contracts/op/OPFaultVerifier.sol
function getStorageValues(
bytes memory context,
GatewayRequest memory req,
bytes memory proof
) external view returns (bytes[] memory, uint8 exitCode) {
...
if (p.gameIndex != gameIndex1) {
(, , IDisputeGame gameProxy1) = _portal.disputeGameFactory().gameAtIndex(gameIndex1);
|> _checkWindow(_getGameTime(gameProxy1), _getGameTime(gameProxy));
}
...
Impact
getStorageValues may actually accept a proof stating a later blockchain state than the context. A user/client requesting a value or account with a context that corresponds to L2 block X might actually receive answers based on block Y > X.
Recommended mitigation steps
In OPFaultVerifier::_getGameTime, when _minAgeSec != 0, consider using g.l2BlockNumber().
Links to affected code
OPFaultVerifier.sol#L93
raffy (Unruggable) disputed and commented:
I’m of the opinion that the unfinalized codepath (_minAgeSec > 0) is offered for testing and private use. The only guarantee it provides is that it accepts an unchallenged game of a sufficient age (relative to current time).
For testing, _minAgeSec = 1 is likely used for the newest data. A custom application may use 12 hours or so and only supply answers to games that have been verified offchain (e.g., skip fault canaries and games that disagree with the chain).
However, there likely exists a game that will satisfy any unfinalized onchain check. The main advantage of timestamp over l2BlockNumber() is that there is a current timestamp available onchain. The only usable l2BlockNumber() anchor would be the last finalized game, which enforces a different kind of constraint:
Δtimestamp <------------------> (finalized)-----------(unfinalized)---------(latest) <--------------------> Δl2BlockNumber
Picodes (judge) decreased severity to Low and commented:
Following the sponsor’s comment, which is convincing to me: _minAgeSec > 0 is unfinalized by definition.
Thanks for judging. There might be a misunderstanding. I understand that _minAgeSec > 0 is intended for unfinalized states.
- The vulnerability is that g.createdAt() doesn’t necessarily correspond to a newer or older blockchain state, due to permissionless game creation. g.createdAt() is the L1 timestamp at which a game creation tx settles on chain.
function initialize() public payable virtual {
    ...
    // Set the game's starting timestamp
    createdAt = Timestamp.wrap(uint64(block.timestamp));
Edge case: gameIdx 1 is created at block 1000, asserting L2 block number 10000. gameIdx 2 is created at block 1001, asserting L2 block number 9990. Note: it’s also possible gameIdx 1 and 2 are created at the same L1 block, but gameIdx 2 is asserting an earlier L2 state.
- The problem is that when comparing g.createdAt() between the context and the proof in _checkWindow, the comparison is not meaningful in the above edge cases. Even when _checkWindow ensures the proof’s gameIndex is created in the range [context.game.createdAt - window, context.game.createdAt], the actual L2 block state of the proof might be of any unfinalized L2 block. g.l2BlockNumber() suggested in the recommendation is not the anchor state; g.l2BlockNumber() is the L2 block number of the dispute that the game is asserting. So using g.l2BlockNumber() in _checkWindow would at least ensure the age of the proof’s L2 blockchain state is within a range that the client application accepts.
/// @notice The l2BlockNumber of the disputed output root in the `L2OutputOracle`.
function l2BlockNumber() public pure returns (uint256 l2BlockNumber_) {
    l2BlockNumber_ = _getArgUint256(0x54);
}
Since createdAt uses block.timestamp, we can compute on L1 exactly how long the game has been visible to the network. I believe time is the only defense against an unfinalized game that hasn’t been challenged successfully.
Note:
- these are the 2 assertions made by GameFinder._isGameUsable().
- dispute games are added sequentially and createdAt can’t be spoofed.
Unlike block.timestamp, there is no mechanism to know the latest L2 block. As my diagram shows above, the contract only knows the block from the latest finalized game. Whether a new game advances the rollup by more or fewer blocks isn’t a concern. The only question is whether the game is valid. _getArgUint256(0x54) from the l2BlockNumber() snippet above corresponds to extraData, which is supplied by the game creator.
_checkWindow() ensures that the game supplied by the gateway is within some time-like bound of the game the contract was able to determine is the “latest” using only onchain data.
- The contract finds a game that’s verifiably minAgeSec old.
- The gateway supplies a response for that game or a game that’s even older than that, but within the window.
In a trustworthy setting:
- The contract finds a game that’s verifiably minAgeSec old.
- The gateway receives a request for that game.
- The gateway notices the data for that game is invalid and uses an older game.
- The contract verifies that the supplied response is within the window.
In a malicious setting:
- The actor proposes a malicious game.
- The actor waits minAgeSec.
- The actor can supply proofs for their root.
As stated above, I believe the only defense against this is time.
Thanks for the detailed comments. I appreciate your time! I understand that when minAgeSec != 0, the game is unfinalized. The finding isn’t about finding a finalized game. The key issues with createdAt are:
Inconsistent Security Assumptions for Client Applications: _checkWindow() serves fundamentally different purposes when used with createdAt, compared to current verifier contracts for other chains. For example, in LineaVerifier.sol, _checkWindow(l2BlockNumber1, p.l2BlockNumber) allows client applications to reliably validate blockchain state within an acceptable age range. If a proof references an L2 block number outside this range, clients can confidently reject the storage values as expired. However, in OPFaultVerifier.sol, createdAt has no direct relationship to the blockchain state being proven. The timestamp at which a game creation transaction is settled provides no meaningful information about the age of the blockchain state being verified. This puts client applications at risk of accepting storage values from blockchain states that may be irrelevant or invalid for their needs.
Unreliable State Mapping in Permissionless Systems: The permissionless nature of OP fault proof game creation means createdAt cannot be trusted to accurately reflect blockchain state progression. As in the edge cases I showed in my previous comment, the settlement timestamp of the createdAt tx shouldn’t be used from a client app’s perspective. Only in a permissioned system with trusted-actor assumptions would a creation timestamp or batch index reliably indicate the age of the blockchain state.
Regarding l2BlockNumber: this value is provided alongside the rootClaim by the game creator and corresponds to the claimed post-state. It effectively represents the specific blockchain state that the proof from the game corresponds to, making it a valid reference point for the GatewayProof.
Also, GameFinder._isGameUsable() is a helper for OPFaultVerifier. The vulnerability of createdAt applies to both.
@oakcobalt - I understand your point that for a consumer having a guarantee about the L2 block may be better in certain cases; however, here the check is clearly on the L1 timestamps, which makes sense from a security standpoint. So I think Low is more appropriate than Medium severity.
[09] Some invalid proof will not be reverted due to vulnerable control flow in ScrollVerifierHooks
Note: At the judge’s request, this downgraded issue from the same warden has been included in this report for completeness.
Finding description and impact
ScrollVerifierHooks allows many variations of an incorrect proof to pass as long as leafHash == 0. Invalid proofs with incorrect leaf node values will be accepted.
This also violates the invariant: Only correct proofs are accepted. Invalid or unproven inputs are rejected.
Proof of Concept
The vulnerability here is in ScrollVerifierHooks::verifyAccountState/verifyStorageValue: when leafHash == 0, the check that leafHash == H(leafNode) is skipped, which allows the proof to contain a leafNode with many variations of value.
In ScrollVerifierHooks::walkTree, when a node type is NODE_LEAF, the leaf node’s hash is not checked. The check is deferred to verifyAccountState/verifyStorageValue.
This creates a vulnerable case where the key derives into an empty leaf hash while the provided proof is of NODE_LEAF type: there is no check on H(leafNode). As long as the provided leaf node from the proof contains the same or a partially matching keyHash, walkTree will return (keyHash = original keyHash, leafHash = bytes32(0), leaf = incorrect leaf node value, exists = true).
//contracts/scroll/ScrollVerifierHooks.sol
function walkTree(
bytes32 key,
bytes memory encodedProof,
bytes32 rootHash,
uint256 leafSize
) internal view returns (bytes32 keyHash, bytes32 h, bytes memory v, bool exists) {
...
} else if (nodeType == NODE_LEAF) {
if (v.length != leafSize) revert InvalidProof();
// NOTE: leafSize is >= 33
if (uint8(v[leafSize - 33]) != 32) revert InvalidProof(); // InvalidKeyPreimageLength
bytes32 temp;
assembly {
temp := mload(add(v, 33))
}
if (temp == keyHash) {
assembly {
temp := mload(add(v, leafSize))
}
if (temp != key) revert InvalidProof(); // InvalidKeyPreimage
//@audit-info note: no checks on H(leafNode) here
|> exists = true;
} else {
// If the trie does not contain a value for key, the returned proof contains all
// nodes of the longest existing prefix of the key (at least the root node), ending
// with the node that proves the absence of the key.
bytes32 p = bytes32((1 << i) - 1); // prefix mask
if ((temp & p) != (keyHash & p)) revert InvalidProof();
// this is a proof for a different value that traverses to the same place
//@audit-info note: no checks on H(leafNode) here
|> keyHash = temp;
}
//@audit-info note: this directly returns walkTree to verifyAccountState / verifyStorageValue without H(leafNode) check.
|> break;
...
In verifyAccountState/verifyStorageValue, because the returned leafHash = bytes32(0), the function returns a 0 value early without performing checks on the leaf node value (bytes memory leaf). Although the returned value is still correct (bytes32 0 or NOT_A_CONTRACT) for non-existent data, the supplied proof has an incorrect leaf node value.
//contracts/scroll/ScrollVerifierHooks.sol
function verifyAccountState(
bytes32 stateRoot,
address account,
bytes memory encodedProof
) external view returns (bytes32 storageRoot) {
(bytes32 keyHash, bytes32 leafHash, bytes memory leaf, bool exists) = walkTree(
bytes20(account),
encodedProof,
stateRoot,
230
); // flags = 0x05080000
//@audit When the returned leafHash == 0, there is no check on leaf value. Proof with incorrect leaf value will not be reverted.
|> if (leafHash == 0) return NOT_A_CONTRACT;
...
Impact
Some invalid proofs will not be rejected. This violates the invariant: Only correct proofs are accepted. Invalid or unproven inputs are rejected.
A malicious prover can submit modified variations of a proof that pass validation. This risks proof validation inconsistency across different verification contexts: proofs validated by this contract might be rejected off-chain or by other Scroll verification processes.
A client contract deriving GatewayFetchTarget.sol will trust a deterministic proof validation system. When multiple invalid proofs can pass validation, although this currently doesn’t directly allow manipulation of the return data, it may open up complex attack vectors depending on the client contract’s business logic.
Proof of Concept
Modifying the leaf node from the original proof will still pass the test. For simplicity, the PoC modifies an empty leaf node 050..0 into 05FF..F. There are other variations as well, such as modifying 050..0 into 04 + keyhash + arbitrary byte filler + 20 + keypreimage.
Run bun test on test/gateway/scroll.test.ts:
test/gateway/scroll.test.ts:
Original proof 0500000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002d5448495320495320534f4d45204d4147494320425954455320464f5220534d54206d317252586750327870444900000000000000000000000000000000000000
Modified Proof: 05FFFFFFFFFFFFFFFF0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002d5448495320495320534f4d45204d4147494320425954455320464f5220534d54206d317252586750327870444900000000000000000000000000000000000000
✓ MAINNET => SCROLL > slot = 101 [1517.74ms]
1 pass
0 fail
1 expect() calls
Ran 1 tests across 1 files. [5.01s]
test/gateway/tests.ts:
...
export function runSlotDataTests(
r: Contract,
pointer = false,
skipZero = false
) {
+ test('slot = 101', async () => {
+ expect(await r.readSlot(101,opts)).toEqual(0n);
+ });
+ //comment out other tests
src/gateway.ts:
export class Gateway<R extends Rollup> extends EZCCIP {
...
constructor(readonly rollup: R) {
...
return this.callLRU.cache(hash, async () => {
const state = await commit.prover.evalDecoded(req);
const proofSeq = await commit.prover.prove(state.needs);
+ const proofLen= proofSeq.proofs.length;
+ // Get the last proof (HexString)
+ let lastProof = proofSeq.proofs[proofLen - 1];
+ // Ensure the proof is long enough
+ if (lastProof.length >= 256) {
+ console.log("Original proof", lastProof.slice(-256));
+ // Manipulate proofs
+ const start = -254;
+ const end = -238;
+ // modified slice
+ const modifiedSlice = 'FF'.repeat((end - start) / 2);
+ lastProof = lastProof.slice(0, lastProof.length + start) + modifiedSlice + lastProof.slice(lastProof.length + end);
+ console.log("Modified Proof:", lastProof.slice(-256));
+ // Update the proof in proofSeq
+ proofSeq.proofs[proofLen - 1] = lastProof;
+ }
return getBytes(this.rollup.encodeWitness(commit, proofSeq));
});
Recommended mitigation steps
Consider moving the leafHash == H(leafNode) check into walkTree’s else if (nodeType == NODE_LEAF) branch.
Links to affected code
- ScrollVerifierHooks.sol#L48
- ScrollVerifierHooks.sol#L81
raffy (Unruggable) disputed and commented:
Note: proving zeros was a change we made during the initial audit. It was originally disabled (like the Scroll yul version) and enabled after we fixed the issue.
I will review this later to make sure I understand but my initial take:
Similar to magic bytes and compressed flags, the value of the leaf hash doesn’t matter if the tree traversal is valid and ends with a zero. The hash check would only be ceremonial.
Picodes (judge) decreased severity to Low and commented:
I tend to agree with the sponsor here. If the returned value is 0, it’s not a contract, so whatever is in storage doesn’t matter anyway; the impact remains minimal.
Mitigation Review
Introduction
Following the C4 audit, warden peakbolt reviewed the mitigations for sponsor-addressed issues from the original audit and also completed an additional full review of the codebase. Details are appended below for each review session.
Mitigation Review Summary
During the mitigation review, it was confirmed that all in-scope findings were mitigated. The table below provides details regarding the status of the in-scope findings from the original audit.
| Original Issue | Status | Mitigation URL |
|---|---|---|
| [M-01] zkTrie maximum depth limit is not enforced in Scroll | Mitigation confirmed | @0f9d4bfc109802db252590e249c9676ddf383368 |
| [M-02] OPFaultVerifier ingests games that resolve incorrectly | Mitigation confirmed | @92c9fbb36dee3a11e0cff1096ce4958ac1491ae6 |
| Additional finding: [F-10] NitroVerifier lacks proper node validation when retrieving latest created nodes in unfinalized state | Mitigation confirmed | @daf1325791356512940511a2bdc54e28d32bba37 |
Mitigation of [M-01]
Confirmed the issue is resolved with the added MAX_TRIE_DEPTH (248) check, which ensures the provided proof does not exceed the max depth of 248.
Mitigation of [M-02]
The fix addresses the reported issue by rejecting any DisputeGame that has been blacklisted by the Guardian for resolving incorrectly.
However, the fix fails to account for DISPUTE_GAME_FINALITY_DELAY_SECONDS in OptimismPortal2.sol#L67. This is known as the air-gap period: a game is only considered finalized when it has been resolved for at least DISPUTE_GAME_FINALITY_DELAY_SECONDS. Its purpose is to provide sufficient time for the Guardian to blacklist any incorrectly resolved games.
Furthermore, this air-gap period check extends beyond withdrawal finalization, as it is used in AnchorStateRegistry#L203-L248 to determine whether a game claim is finalized in a generic manner.
/// @notice Returns whether a game is finalized.
/// @param _game The game to check.
/// @return Whether the game is finalized.
function isGameFinalized(IDisputeGame _game) public view returns (bool) {
// Game must be resolved.
if (!isGameResolved(_game)) {
return false;
}
// Game must be beyond the airgap period.
if (!isGameAirgapped(_game)) {
return false;
}
return true;
}
/// @notice Returns whether a game's root claim is valid.
/// @param _game The game to check.
/// @return Whether the game's root claim is valid.
function isGameClaimValid(IDisputeGame _game) public view returns (bool) {
// Game must be a proper game.
bool properGame = isGameProper(_game);
if (!properGame) {
return false;
}
// Must be respected.
bool respected = isGameRespected(_game);
if (!respected) {
return false;
}
// Game must be finalized.
bool finalized = isGameFinalized(_game);
if (!finalized) {
return false;
}
// Game must be resolved in favor of the defender.
if (_game.status() != GameStatus.DEFENDER_WINS) {
return false;
}
return true;
}
Previous recommendation
It is recommended to use AnchorStateRegistry.isGameClaimValid() for finalized mode and also adapt it for unfinalized mode (to check the game is proper, respected and not rejected). This will ensure that the verifier mirrors OP’s game verification.
Latest recommendation
As the previous recommendations above are based on new contracts that are not yet deployed, it is better to mirror OP’s current game verification, based on the withdrawal checks of the deployed OptimismPortal.
Specifically, the new recommendation is to add the following checks for finalized mode (minAgeSec == 0):
- Check the game was not created before portal.respectedGameTypeUpdatedAt.
- Ensure that the game has been resolved for at least DISPUTE_GAME_FINALITY_DELAY_SECONDS (has passed the air-gap period).
These checks currently exist in checkWithdrawal() in OptimismPortal2.sol#L505-L518:
// The game must have been created after `respectedGameTypeUpdatedAt`. This is to prevent users from creating
// invalid disputes against a deployed game type while the off-chain challenge agents are not watching.
require(
createdAt >= respectedGameTypeUpdatedAt,
"OptimismPortal: dispute game created before respected game type was updated"
);
// Before a withdrawal can be finalized, the dispute game it was proven against must have been
// resolved for at least `DISPUTE_GAME_FINALITY_DELAY_SECONDS`. This is to allow for manual
// intervention in the event that a dispute game is resolved incorrectly.
require(
block.timestamp - disputeGameProxy.resolvedAt().raw() > DISPUTE_GAME_FINALITY_DELAY_SECONDS,
"OptimismPortal: output proposal in air-gap"
);
Unruggable
Fixed here.
Zenith
Resolved by adding finalized mode validation to check that the selected game is finalized as follows:
(created > respectedGameTypeUpdatedAt &&
    status == DEFENDER_WINS &&
    block.timestamp - gameProxy.resolvedAt() > DISPUTE_GAME_FINALITY_DELAY_SECONDS)
Note: in finalized mode, only games with the respected game type are considered valid (gameTypeBitMask == 0). Also, it only considers games with createdAt > respectedGameTypeUpdatedAt, as games with createdAt <= respectedGameTypeUpdatedAt are now considered retired in OP.
clowestab (Unruggable) commented:
Resolution for outstanding F-13 recommendations as of 31st January 2025: @92c9fbb.
Mitigation of additional finding [F-10]
First review
The fix for F-10 does not fully address the issue, as stakerCount could also include zombies (stakers that lost a challenge). That is because when stakers lose a challenge, they are turned into zombies and are not removed from stakerCount immediately. That means it is possible for rejected nodes to have stakerCount > 0.
In Nitro’s code, the definition of “no stakers” refers to having no non-zombie (active) stakers. This is evident from RollupUserLogic.sol, where rejectNextNode() checks for zero non-zombies (active stakers) to proceed with the node rejection.
function rejectNextNode(address stakerAddress) external onlyValidator whenNotPaused {
..
// Verify that no staker is staked on this node
require(
firstUnresolvedNode_.stakerCount == countStakedZombies(firstUnresolvedNodeNum),
"HAS_STAKERS"
);
Recommendation
However, my recommendation is not to use stakerCount but instead to rely on firstUnresolvedNode() and latestConfirmedNode() to skip rejected nodes in the following manner:
- Traverse backward from latestNodeCreated() to find the node that satisfies _minBlocks.
- If the next iteration goes past firstUnresolvedNode(), it should then continue the traversal from latestConfirmedNode() and traverse using node.prevNum to move through the confirmed nodes (just like finalized mode).
Doing so will allow us to skip the rejected nodes (below) in an efficient manner.
If we look at this diagram from Arbitrum we can see two scenarios of rejected nodes.
- Rejected nodes (104, 105) that are after latestConfirmedNode(): these are skipped by jumping from firstUnresolvedNode() to latestConfirmedNode(), as that skips the nodes that had been resolved and rejected by the protocol.
- Rejected nodes (101) that are before latestConfirmedNode(): these are skipped simply by traversing confirmed nodes as in finalized mode, rather than using descending node numbers. We need to consider this case as it is possible to select a node that is older than the latestConfirmed() node.
Note: unresolved node 111 will eventually be rejected by the protocol, as it is a child of rejected node 104. That means we could either treat it as unresolved and allow it to be selected by getLatestContext(), or try to anticipate the rejection and not allow it to be selected.
My recommended solution treats node 111 as unresolved and allows it to be selected, for simplicity and to respect the protocol’s state. There is nothing wrong with rejecting node 111, though it is more complex, as there is no direct way to check whether a parent/ancestor node is rejected. Furthermore, there are other pending-rejection scenarios as well, such as deadlines and rejection of child/descendant nodes, so it is not easy to address them all.
Unruggable
Fixed here.
Zenith
Resolved with a new _isNodeUsable() that checks the specified node is not rejected via the check node.stakerCount > _rollup.countStakedZombies(index). When node.stakerCount == _rollup.countStakedZombies(index), the node has zero active stakers, which by definition means it is a rejected or pending-rejection node.
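To make the comparison described above concrete, a reduced sketch (the interface below is a hypothetical simplification of Nitro’s rollup contracts, shown only to illustrate the zombie-staker check):

// Hypothetical, minimal view of the rollup used only for this illustration.
interface IRollupLike {
    function countStakedZombies(uint64 nodeNum) external view returns (uint256);
}

// Sketch: a node whose stakers are all zombies (stakers that lost a challenge)
// has zero active stakers and is rejected or pending rejection, so it must not
// be selected.
function hasActiveStakers(
    IRollupLike rollup,
    uint64 nodeNum,
    uint256 stakerCount
) view returns (bool) {
    return stakerCount > rollup.countStakedZombies(nodeNum);
}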
Additional Review Summary
During the additional review of the codebase, 3 findings were surfaced, 1 Medium severity and 2 Low severity. The table below provides details regarding the status of those findings from the additional review.
| Issue | Status | Mitigation URL |
|---|---|---|
| [M-01] OPVerifier incorrectly verifies state with unfinalized nodes when minAgeSec == 0 | Mitigation confirmed | @daf1325791356512940511a2bdc54e28d32bba37 |
| [L-01] MerkleTrie should reject un-used proofs | Mitigation confirmed | @daf1325791356512940511a2bdc54e28d32bba37 |
| [L-02] ScrollVerifierHooks should ensure there are no un-used proofs | Mitigation confirmed | @daf1325791356512940511a2bdc54e28d32bba37 |
[M-01] OPVerifier incorrectly verifies state with unfinalized nodes when minAgeSec == 0
Lines of Code
Description
Verifiers with minAgeSec == 0 (finalized mode) are expected to perform state verification based on finalized nodes/blocks. However, OPVerifier’s finalized mode incorrectly uses the latest proposed nodes, which are not finalized yet.
That is because OP (pre-fault-proof version) has a finalizationPeriodSeconds field that defines the “Number of seconds that a proposal must be available to challenge before it is considered finalized by the OptimismPortal contract”, as stated in the docs.
This is exposed as the getter isOutputFinalized() in OptimismPortal.sol#L421-L428, and is also utilized when finalizing withdrawals.
In the code below, we can see that getLatestContext() uses the latest proposed output node from latestOutputIndex() when minAgeSec == 0. As the OP block time is 2 seconds, the latest output node is unfinalized, as it would not have passed the finalization period (typically 7 days).
function getLatestContext() external view returns (bytes memory) {
uint256 i = _oracle.latestOutputIndex();
//@audit when `_minAgeSec == 0` it uses the the latest proposed output node, which is unfinalized
uint256 t = block.timestamp - _minAgeSec;
while (true) {
Types.OutputProposal memory output = _oracle.getL2Output(i);
if (output.timestamp <= t) {
return abi.encode(i);
}
if (i == 0) break;
--i;
}
revert('OP: no output');
}
In addition, getStorageValues() does not ensure that the output node given by the gateway is finalized when p.outputIndex != outputIndex1.
function getStorageValues(
bytes memory context,
GatewayRequest memory req,
bytes memory proof
) external view returns (bytes[] memory, uint8 exitCode) {
uint256 outputIndex1 = abi.decode(context, (uint256));
GatewayProof memory p = abi.decode(proof, (GatewayProof));
Types.OutputProposal memory output = _oracle.getL2Output(p.outputIndex);
if (p.outputIndex != outputIndex1) {
//@audit missing check to ensure that the output node is finalized when `minAgeSec == 0`
Types.OutputProposal memory output1 = _oracle.getL2Output(
outputIndex1
);
_checkWindow(output1.timestamp, output.timestamp);
}
Recommendation
Update OPVerifier to perform the following when minAgeSec == 0:
- getLatestContext() should look for the latest node that is finalized by checking against isOutputFinalized() (see the sketch after this list).
- getStorageValues() should check that the node provided by the gateway is finalized when p.outputIndex != outputIndex1.
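A minimal sketch of the first item (an assumption for illustration: it reuses the loop from the excerpt above and assumes the verifier also holds a _portal reference to the OptimismPortal, which exposes isOutputFinalized()):

function getLatestContext() external view returns (bytes memory) {
    uint256 i = _oracle.latestOutputIndex();
    uint256 t = block.timestamp - _minAgeSec;
    while (true) {
        Types.OutputProposal memory output = _oracle.getL2Output(i);
        // finalized mode defers to the portal's finalization check;
        // unfinalized mode keeps the existing age comparison
        bool usable = _minAgeSec == 0
            ? _portal.isOutputFinalized(i)
            : output.timestamp <= t;
        if (usable) {
            return abi.encode(i);
        }
        if (i == 0) break;
        --i;
    }
    revert('OP: no output');
}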
Unruggable
Fixed here.
[M-01] has been resolved utilising a ‘Game Finder’ approach (OPOutputFinder) similar to that used in the OPFaultVerifier. This facilitates an efficient search of submitted outputs, giving consideration to both finalised and unfinalised Gateway/Verifier operation. It also minimises the need for excessive RPC calls within the JavaScript implementation.
Zenith
Resolved with a new OPOutputFinder for OPVerifier, which will search for the latest node with timestamp <= minAgeSec. For minAgeSec == 0, it uses oracle.finalizationPeriodSeconds(), which essentially searches for the latest finalized node.
Note: for finalized mode, OPVerifier.getStorageValues() does not need an explicit check that the gateway-provided node is finalized, as it relies on _checkWindow() to reject unfinalized nodes. _checkWindow() will reject nodes after the latest finalized node, which are by definition unfinalized. In addition, in OP (pre-fault-game), rejected nodes are deleted, so nodes older than the latest finalized node are always finalized.
[L-01] MerkleTrie should reject un-used proofs
Lines of Code
Description
In MerkleTrie, the function _walkNodePath() retrieves the value node by walking through the proof array with the key and verifies its inclusion in the proof.
When currentKeyIndex == key.length, it terminates the loop and returns, as it has reached the end of the key and found the value node for the key (as shown below).
However, it fails to check that there are no more un-used proof elements in the proof array before terminating in that situation. That is because the proof is considered invalid if it contains more proof elements than what is required to verify the key and value node.
This differs from a recent version of OP’s MerkleTrie.sol#L111-L112, which checks that it has reached the end of the proof array when the value node is found.
function _walkNodePath(
TrieNode[] memory _proof,
bytes memory _key,
bytes32 _root
)
...
if (currentNode.decoded.length == BRANCH_NODE_LENGTH) {
if (currentKeyIndex == key.length) {
// We've hit the end of the key
// meaning the value should be within this branch node.
break;
} else {
Recommendation
Consider adding the i == proof.length - 1 check in _walkNodePath().
function _walkNodePath(
TrieNode[] memory _proof,
bytes memory _key,
bytes32 _root
)
...
if (currentNode.decoded.length == BRANCH_NODE_LENGTH) {
if (currentKeyIndex == key.length) {
// We've hit the end of the key
// meaning the value should be within this branch node.
+ // Extra proof elements are not allowed.
+ require(i == proof.length - 1, "MerkleTrie: value node must be last node in proof (branch)");
break;
} else {
Unruggable
Fixed here.
Zenith
Resolved as per recommendations.
[L-02] ScrollVerifierHooks should ensure there are no un-used proofs
Lines of Code
ScrollVerifierHooks.sol#L118-L141
Description
The function walkTree() in ScrollVerifierHooks will walk down the proof[] array to search for the leaf node and retrieve the value for the specific key. It also performs the inclusion proving while iterating through the proofs.
When nodeType == NODE_LEAF, it has reached the leaf node, which could be (1) the node that holds the value for the key, or (2) the node of the longest existing prefix of the key, proving the absence of the key.
However, it does not check that there are no extra un-used proof elements in proof[], which would technically make it an invalid proof as a whole, even though a subset of it was valid. This check is performed in Scroll’s ZkTrieVerifier.
function walkTree(
...
} else if (nodeType == NODE_LEAF) {
if (v.length != leafSize) revert InvalidProof();
// NOTE: leafSize is >= 33
if (uint8(v[leafSize - 33]) != 32) revert InvalidProof(); // InvalidKeyPreimageLength
bytes32 temp;
assembly {
temp := mload(add(v, 33))
}
if (temp == keyHash) {
assembly {
temp := mload(add(v, leafSize))
}
if (temp != key) revert InvalidProof(); // InvalidKeyPreimage
exists = true;
} else {
// If the trie does not contain a value for key, the returned proof contains all
// nodes of the longest existing prefix of the key (at least the root node), ending
// with the node that proves the absence of the key.
bytes32 p = bytes32((1 << i) - 1); // prefix mask
if ((temp & p) != (keyHash & p)) revert InvalidProof();
// this is a proof for a different value that traverses to the same place
keyHash = temp;
}
break;
}
Recommendation
Update walkTree() to reject the proof if there are un-used proof elements after the leaf node and magic bytes.
function walkTree(
...
} else if (nodeType == NODE_LEAF) {
if (v.length != leafSize) revert InvalidProof();
// NOTE: leafSize is >= 33
if (uint8(v[leafSize - 33]) != 32) revert InvalidProof(); // InvalidKeyPreimageLength
+ // Proof is invalid if there are un-used proof elements after this leaf node and magic bytes
+ if (keccak256(proof[i + 1]) != keccak256("THIS IS SOME MAGIC BYTES FOR SMT m1rRXgP2xpDI")) revert InvalidProof();
+ if (proof.length - 1 != i + 1) revert InvalidProof();
bytes32 temp;
assembly {
temp := mload(add(v, 33))
}
if (temp == keyHash) {
assembly {
temp := mload(add(v, leafSize))
}
if (temp != key) revert InvalidProof(); // InvalidKeyPreimage
exists = true;
} else {
// If the trie does not contain a value for key, the returned proof contains all
// nodes of the longest existing prefix of the key (at least the root node), ending
// with the node that proves the absence of the key.
bytes32 p = bytes32((1 << i) - 1); // prefix mask
if ((temp & p) != (keyHash & p)) revert InvalidProof();
// this is a proof for a different value that traverses to the same place
keyHash = temp;
}
break;
}
Unruggable
Fixed here.
Zenith
Resolved as per recommendations.
clowestab (Unruggable) commented:
[L-01] and [L-02] have been resolved for strict correctness.
[M-01] has been resolved utilising a ‘Game Finder’ approach (OPOutputFinder) similar to that used in the OPFaultVerifier. This facilitates an efficient search of submitted outputs giving consideration to both finalised and unfinalised Gateway/Verifier operation. It also minimises the need for excessive RPC calls within the Javascript implementation.
The OP portal and oracle contracts provide access to data pertaining to the submission of outputs and finalisation, however naive usage of this data assumes efficient, ‘correct’ proposer operation.
Whilst these changes touch numerous files, most of these changes pertain to changes to configuration passing and are simple in nature.
Fixes: https://github.com/unruggable-labs/unruggable-gateways/commits/audit-fixes/
Disclosures
C4 is an open organization governed by participants in the community.
C4 audits incentivize the discovery of exploits, findings, and bugs in smart contracts. Security researchers are rewarded at an increasing rate for finding higher-risk issues. Audit submissions are judged by a knowledgeable security researcher and disclosed to sponsoring developers. C4 does not conduct formal verification regarding the provided code but instead provides final verification.
C4 does not provide any guarantee or warranty regarding the security of this project. All smart contract software should be used at the sole risk and responsibility of users.