Taiko
Findings & Analysis Report
2024-04-26
Table of contents
- Summary
- Scope
- Severity Criteria
- High Risk Findings (5)
- [H-01] Gas issuance is inflated and will halt the chain or lead to incorrect base fee
- [H-02] Validity and contest bonds can be incorrectly burned for the correct and ultimately verified transition
- [H-03] Users will never be able to withdraw their claimed airdrop fully in ERC20Airdrop2.sol contract
- [H-04] Taiko L1 - Proposer can maliciously cause loss of funds by forcing someone else to pay prover’s fee
- [H-05] Signatures can be replayed in withdraw() to withdraw more tokens than the user originally intended
- Medium Risk Findings (14)
- [M-01] There is no slippage check for the eth deposits processing in the LibDepositing.processDeposits
- [M-02] The top tier prover can not re-prove
- [M-03] retryMessage unable to handle edge cases.
- [M-04] A recalled ERC20 bridge transfer can lock tokens in the bridge
- [M-05] Bridge watcher can forge arbitrary message and drain bridge
- [M-06] First block proposer check in the LibProposing._isProposerPermitted function is erroneous
- [M-07] Incorrect __Essential_init() function is used in TaikoToken, preventing the snapshooter from calling snapshot()
- [M-08] Bridged tokens would be lost if sender and receiver are contracts that don’t implement fallback/receive
- [M-09] LibProposing:proposeBlock allows blocks with a zero parentMetaHash to be proposed after the genesis block and avoid parent block verification
- [M-10] The decision to return the liveness bond depends solely on the last guardian
- [M-11] Proposers would choose to avoid higher tier by exploiting non-randomness of parameter used in getMinTier()
- [M-12] Invocation delays are not honoured when protocol unpauses
- [M-13] Taiko SGX Attestation - Improper validation in certchain decoding
- [M-14] Malicious caller of processMessage() can pocket the fee while forcing excessivelySafeCall() to fail
- Low Risk and Non-Critical Issues
- L-01 Consider initializing ContextUpgradeable contract by calling __Context_init() in TaikoToken.sol
- L-02 __ERC1155Receiver_init() not initialized in ERC1155Vault
- L-03 If amountUnlocked in TimelockTokenPool is less than 1e18, rounding down occurs
- L-04 sendSignal() calls can be spammed by attacker to relayer
- L-05 Add “Zero if owner will process themself” comment to gasLimit instead of fee
- L-06 Bridge integration issues with swapping protocols
- L-07 sendMessage() does not check if STATUS is equal to NEW
- L-08 Protocol does not refund extra ETH but implements strict check
- L-09 If a message is suspended before processMessage() is called, the ERC20 tokens on the source chain and Ether are not refunded.
- L-10 User loses all Ether if their address is blacklisted on canonical token
- L-11 onMessageInvocation checks in _invokeMessageCall() can be bypassed to call arbitrary function from Bridge contract
- L-12 Consider reading return value from snapshot() function
- L-13 One off error in block sync threshold check to sync chain data
- L-14 One-off error when evaluating deposits to process with the ring buffer size
- R-01 Consider implementing changeBridgedToken() and btokenBlacklist for ERC721Vault and ERC1155Vault
- R-02 Instead of passing an empty string for the data parameter in NFT vaults on token transfers, allow users to supply data
- R-03 Use named imports to improve readability of the code and avoid polluting the global namespace
- N-01 Avoid hardcoding data in BridgedERC1155
- N-02 Missing source()/canonical() function on BridgedERC1155 contract
- N-03 Using unchecked arithmetic in for loops is handled by solc compiler 0.8.22 onwards
- N-04 Typo in comment in Bytes.sol
- N-05 Incorrect comment regarding gasLimit in processMessage()
- N-06 Use require instead of assert
- N-07 Incorrect natspec comment for proveMessageReceived()
- N-08 Missing address(0) check for USDC in USDCAdapter
- N-09 srcToken and srcChainId is not updated on old token after migration through changeBridgedToken()
- N-10 MerkleClaimable does not check if claimStart is less than claimEnd
- N-11 Consider reading return value from snapshot() function
- N-12 Guardian proof that is never fully approved by minGuardians is never deleted
- N-13 Consider making the TIMELOCK_ADMIN_ROLE undergo a delay when transferring the admin role
- 1. Introduction
- 2. Architecture and protocol overview
- 3. Scope Contracts
- 4. Codebase Analysis
- 5. Economic Model Analysis
- 6. Architecture Business Logic
- 7. Representation of Risk Model
- 8. Architecture Recommendations
- 9. Learning And Insights
- 10. Conclusion
- 11. Message For Taiko Team
- 12. Time Spent
- 13. References
- Disclosures
Overview
About C4
Code4rena (C4) is an open organization consisting of security researchers, auditors, developers, and individuals with domain expertise in smart contracts.
A C4 audit is an event in which community participants, referred to as Wardens, review, audit, or analyze smart contract logic in exchange for a bounty provided by sponsoring projects.
During the audit outlined in this document, C4 conducted an analysis of the Taiko smart contract system written in Solidity. The audit took place between March 6 and March 27, 2024.
Wardens
74 Wardens contributed reports to Taiko:
- monrel
- Shield (Viraz, 0xA5DF, Dravee, and Udsen)
- t0x1c
- zzebra83
- MrPotatoMagic
- ladboy233
- joaovwfreire
- alexfilippov314
- Tendency
- Aymen0909
- pa6kuda
- t4sk
- mojito_auditor
- lightoasis
- 0xleadwizard
- wangxx2026
- josephdara
- blockdev
- Sathish9098
- Mahi_Vasisth
- imare
- Limbooo
- kaveyjoe
- Myd
- yongskiws
- fouzantanveer
- 0xepley
- hassanshakeel13
- popeye
- aariiif
- roguereggiant
- Fassi_Security (bronze_pickaxe and mxuse)
- albahaca
- DadeKuma
- hunter_w3b
- zabihullahazadzoi
- 0x11singh99
- slvDev
- pfapostol
- hihen
- grearlake
- dharma09
- 0xAnah
- IllIllI
- iamandreiski
- lanrebayode77
- cheatc0d3
- clara
- JCK
- foxb868
- pavankv
- rjs
- sxima
- Pechenite (Bozho and radev_sw)
- oualidpro
- LinKenji
- 0xbrett8571
- emerald7017
- Masamune
- Kalyan-Singh
- n1punp
- Auditor2947
- SAQ
- SY_S
- SM3_SS
- unique
- 0xhacksmithh
- K42
- caglankaan
This audit was judged by 0xean.
Final report assembled by PaperParachute.
Summary
The C4 analysis yielded an aggregated total of 19 unique vulnerabilities. Of these vulnerabilities, 5 received a risk rating in the category of HIGH severity and 14 received a risk rating in the category of MEDIUM severity.
Additionally, C4 analysis included 33 reports detailing issues with a risk rating of LOW severity or non-critical. There were also 28 reports recommending gas optimizations.
All of the issues presented here are linked back to their original finding.
Scope
The code under review can be found within the C4 Taiko repository, and is composed of 80 smart contracts written in the Solidity programming language and includes 7442 lines of Solidity code.
Severity Criteria
C4 assesses the severity of disclosed vulnerabilities based on three primary risk categories: high, medium, and low/non-critical.
High-level considerations for vulnerabilities span the following key areas when conducting assessments:
- Malicious Input Handling
- Escalation of privileges
- Arithmetic
- Gas use
For more information regarding the severity criteria referenced throughout the submission review process, please refer to the documentation provided on the C4 website, specifically our section on Severity Categorization.
High Risk Findings (5)
[H-01] Gas issuance is inflated and will halt the chain or lead to incorrect base fee
Submitted by monrel
The base fee calculation in the anchor() function is incorrect. Issuance is over-inflated and will either lead to the chain halting or a severely deflated base fee.
Proof of Concept
We calculate the 1559 base fee and compare it to block.basefee
https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/L2/TaikoL2.sol#L140-L143
(basefee, gasExcess) = _calc1559BaseFee(config, _l1BlockId, _parentGasUsed);
if (!skipFeeCheck() && block.basefee != basefee) {
    revert L2_BASEFEE_MISMATCH();
}
But the calculation is incorrect:
if (gasExcess > 0) {
// We always add the gas used by parent block to the gas excess
// value as this has already happened
uint256 excess = uint256(gasExcess) + _parentGasUsed;
// Calculate how much more gas to issue to offset gas excess.
// after each L1 block time, config.gasTarget more gas is issued,
// the gas excess will be reduced accordingly.
// Note that when lastSyncedBlock is zero, we skip this step
// because that means this is the first time calculating the basefee
// and the difference between the L1 height would be extremely big,
// reverting the initial gas excess value back to 0.
uint256 numL1Blocks;
if (lastSyncedBlock > 0 && _l1BlockId > lastSyncedBlock) {
numL1Blocks = _l1BlockId - lastSyncedBlock;
}
if (numL1Blocks > 0) {
uint256 issuance = numL1Blocks * _config.gasTargetPerL1Block;
excess = excess > issuance ? excess - issuance : 1;
}
gasExcess_ = uint64(excess.min(type(uint64).max));
// The base fee per gas used by this block is the spot price at the
// bonding curve, regardless the actual amount of gas used by this
// block, however, this block's gas used will affect the next
// block's base fee.
basefee_ = Lib1559Math.basefee(
gasExcess_, uint256(_config.basefeeAdjustmentQuotient) * _config.gasTargetPerL1Block
);
}
Instead of issuing _config.gasTargetPerL1Block for each L1 block, we end up issuing uint256 issuance = (_l1BlockId - lastSyncedBlock) * _config.gasTargetPerL1Block.
lastSyncedBlock is only updated every 5 blocks.
if (_l1BlockId > lastSyncedBlock + BLOCK_SYNC_THRESHOLD) {
// Store the L1's state root as a signal to the local signal service to
// allow for multi-hop bridging.
ISignalService(resolve("signal_service", false)).syncChainData(
ownerChainId, LibSignals.STATE_ROOT, _l1BlockId, _l1StateRoot
);
lastSyncedBlock = _l1BlockId;
}
If anchor() is called on 5 consecutive blocks, we end up issuing 15 * _config.gasTargetPerL1Block in total instead of 5 * _config.gasTargetPerL1Block.
When the calculated base fee is compared to block.basefee, the following happens:
- If block.basefee reports the correct base fee, this will end up halting the chain since the two values will not match.
- If block.basefee is using the same flawed calculation, the chain continues but with a severely reduced and incorrect base fee.
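The over-issuance can be checked numerically. Below is an illustrative Python model (not the Solidity source; the names are simplified from the contract) of the issuance loop above, showing that five consecutive anchor() calls issue three times the target:

```python
# Illustrative model of the flawed issuance described above. Assumes
# lastSyncedBlock stays at 1 for the whole window (it only advances every
# BLOCK_SYNC_THRESHOLD = 5 blocks), so numL1Blocks grows on every call.
GAS_TARGET_PER_L1_BLOCK = 15 * 10**6 * 4  # value from the PoC config

def issuance_over_blocks(block_ids, last_synced=1):
    """Sum per-anchor issuance while lastSyncedBlock is never advanced."""
    total = 0
    for l1_block_id in block_ids:
        num_l1_blocks = l1_block_id - last_synced  # 1, 2, 3, 4, 5 ...
        total += num_l1_blocks * GAS_TARGET_PER_L1_BLOCK
    return total

flawed = issuance_over_blocks(range(2, 7))  # anchor() on 5 consecutive blocks
expected = 5 * GAS_TARGET_PER_L1_BLOCK      # one target per L1 block
assert flawed == 3 * expected               # 15x the target instead of 5x
```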
Here is a simple POC showing the actual issuance compared to the expected issuance. Paste the code into TaikoL1LibProvingWithTiers.t.sol and run forge test --match-test testIssuance -vv.
struct Config {
uint32 gasTargetPerL1Block;
uint8 basefeeAdjustmentQuotient;
}
function getConfig() public view virtual returns (Config memory config_) {
config_.gasTargetPerL1Block = 15 * 1e6 * 4;
config_.basefeeAdjustmentQuotient = 8;
}
uint256 lastSyncedBlock = 1;
uint256 gasExcess = 10;
function _calc1559BaseFee(
Config memory _config,
uint64 _l1BlockId,
uint32 _parentGasUsed
)
private
view
returns (uint256 issuance, uint64 gasExcess_)
{
if (gasExcess > 0) {
uint256 excess = uint256(gasExcess) + _parentGasUsed;
uint256 numL1Blocks;
if (lastSyncedBlock > 0 && _l1BlockId > lastSyncedBlock) {
numL1Blocks = _l1BlockId - lastSyncedBlock;
}
if (numL1Blocks > 0) {
issuance = numL1Blocks * _config.gasTargetPerL1Block;
excess = excess > issuance ? excess - issuance : 1;
}
// I have commented out the below basefee calculation
// and return issuance instead to show the actual
// accumulated issuance over 5 L1 blocks.
// nothing else is changed
//gasExcess_ = uint64(excess.min(type(uint64).max));
//basefee_ = Lib1559Math.basefee(
// gasExcess_, uint256(_config.basefeeAdjustmentQuotient) * _config.gasTargetPerL1Block
//);
}
//if (basefee_ == 0) basefee_ = 1;
}
function testIssuance() external {
uint256 issuance;
uint256 issuanceAdded;
Config memory config = getConfig();
for (uint64 x=2; x <= 6 ;x++){
(issuanceAdded ,) = _calc1559BaseFee(config, x, 0);
issuance += issuanceAdded;
console2.log("added", issuanceAdded);
}
uint256 expectedIssuance = config.gasTargetPerL1Block*5;
console2.log("Issuance", issuance);
console2.log("Expected Issuance", expectedIssuance);
assertEq(expectedIssuance*3, issuance);
}
Tools Used
Foundry, VSCode
Recommended Mitigation Steps
Issue exactly config.gasTargetPerL1Block
for each L1 block.
dantaik (Taiko) confirmed and commented:
This is a valid bug report. Fixed in this PR: https://github.com/taikoxyz/taiko-mono/pull/16543
I don’t see a direct loss of funds here and believe M is the correct severity.
2 — Med: Assets not at direct risk, but the function of the protocol or its availability could be impacted, or leak value with a hypothetical attack path with stated assumptions, but external requirements.
3 — High: Assets can be stolen/lost/compromised directly (or indirectly if there is a valid attack path that does not have hand-wavy hypotheticals).
A halted chain leads to frozen funds. The chain will progress for a minimum of 2 blocks since the calculation is correct when lastSyncedBlock = 0 and when _l1BlockId - lastSyncedBlock = 1.
After the second block the base fee will still be correct as long as excess < issuance for both the inflated and correct calculation, since both result in excess = 1:
https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/L2/TaikoL2.sol#L279-L282
if (numL1Blocks > 0) {
    uint256 issuance = numL1Blocks * _config.gasTargetPerL1Block;
    excess = excess > issuance ? excess - issuance : 1;
}
At the block where the base fee is incorrect the chain is halted and funds are locked since the anchor now reverts in perpetuity.
In practice Taiko can easily release all funds by upgrading the contracts but I believe such an intervention should not be considered when evaluating the severity of an issue. From C4 Supreme Court session, Fall 2023
Contract upgradability should never be used as a severity mitigation, i.e. we assume contracts are non-upgradable.
I therefore believe a High is fair here.
I don’t entirely agree, since the chain would be halted so soon in its existence; that being said, some amount of funds, albeit small, would likely be lost. @dantaik / @adaki2004 any last comments before leaving as H severity?
Agreed, can do!
Awarding as H, final decision.
[H-02] Validity and contest bonds can be incorrectly burned for the correct and ultimately verified transition
Submitted by monrel, also found by t0x1c
Both validity and contest bonds can be wrongfully slashed even if the transition ends up being the correct and verified one.
The issue comes from the fact that the history of the final verified transition is not taken into account.
Example 1: Validity bond is wrongfully burned:
- Bob proves transition T1 for parent P1.
- Alice contests and proves T2 for parent P1 with a higher tier proof.
- The Guardian steps in to correctly prove T1 for parent P1.
At step 2 Bob loses his bond and is permanently written out of the history of P1:
https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/L1/libs/LibProving.sol#L387-L392
_ts.validityBond = _tier.validityBond;
_ts.contestBond = 1;
_ts.contester = address(0);
_ts.prover = msg.sender;
_ts.tier = _proof.tier;
Example 2: Contest bond wrongfully slashed:
- Alice proves T1 for parent P1 with SGX.
- Bob contests T1 for parent P1.
- Alice proves T1 with SGX_ZK for parent P1.
- The Guardian steps in to correctly disprove T1 with T2 for parent P1.
Bob was correct and T1 was ultimately proven false. Bob still loses his contest bond.
When the Guardian overrides the proof they cannot pay back Bob’s validity or contest bond. They are only able to pay back a liveness bond:
https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/L1/libs/LibProving.sol#L189-L199
if (isTopTier) {
// A special return value from the top tier prover can signal this
// contract to return all liveness bond.
bool returnLivenessBond = blk.livenessBond > 0 && _proof.data.length == 32
&& bytes32(_proof.data) == RETURN_LIVENESS_BOND;
if (returnLivenessBond) {
tko.transfer(blk.assignedProver, blk.livenessBond);
blk.livenessBond = 0;
}
}
These funds are now frozen since they are sent to the Guardian contract which has no ability to recover them.
uint256 bondToReturn = uint256(ts.validityBond) + blk.livenessBond;
if (ts.prover != blk.assignedProver) {
bondToReturn -= blk.livenessBond >> 1;
}
IERC20 tko = IERC20(_resolver.resolve("taiko_token", false));
tko.transfer(ts.prover, bondToReturn);
ts.prover will be the Guardian since they are the last to prove the block.
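The bondToReturn arithmetic quoted above can be modeled in a few lines (the bond values below are hypothetical, chosen only for illustration):

```python
# Toy model of the bondToReturn branch quoted above; values are hypothetical.
VALIDITY_BOND = 250
LIVENESS_BOND = 100

def bond_to_return(prover, assigned_prover):
    bond = VALIDITY_BOND + LIVENESS_BOND
    if prover != assigned_prover:
        bond -= LIVENESS_BOND >> 1  # half the liveness bond is withheld
    return bond

# When ts.prover is the Guardian, the whole payout lands in the Guardian
# contract, which has no mechanism to forward it: the funds are frozen.
assert bond_to_return("guardian", "bob") == 300
assert bond_to_return("bob", "bob") == 350
```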
Proof of Concept
POC for example 1. Paste the below code into the TaikoL1LibProvingWithTiers.t file and run forge test --match-test testProverLoss -vv:
function testProverLoss() external{
giveEthAndTko(Alice, 1e7 ether, 1000 ether);
giveEthAndTko(Carol, 1e7 ether, 1000 ether);
giveEthAndTko(Bob, 1e6 ether, 100 ether);
console2.log("Bob balance:", tko.balanceOf(Bob));
uint256 bobBalanceBefore = tko.balanceOf(Bob);
vm.prank(Bob, Bob);
bytes32 parentHash = GENESIS_BLOCK_HASH;
uint256 blockId = 1;
(TaikoData.BlockMetadata memory meta,) = proposeBlock(Alice, Bob, 1_000_000, 1024);
console2.log("Bob balance After propose:", tko.balanceOf(Bob));
mine(1);
bytes32 blockHash = bytes32(1e10 + blockId);
bytes32 stateRoot = bytes32(1e9 + blockId);
(, TaikoData.SlotB memory b) = L1.getStateVariables();
uint64 lastVerifiedBlockBefore = b.lastVerifiedBlockId;
// Bob proves transition T1 for parent P1
proveBlock(Bob, Bob, meta, parentHash, blockHash, stateRoot, meta.minTier, "");
console2.log("Bob balance After proof:", tko.balanceOf(Bob));
uint16 minTier = meta.minTier;
// Higher Tier contests by proving transition T2 for same parent P1
proveHigherTierProof(meta, parentHash, bytes32(uint256(1)), bytes32(uint256(1)), minTier);
// Guardian steps in to prove T1 is correct transition for parent P1
proveBlock(
David, David, meta, parentHash, blockHash, stateRoot, LibTiers.TIER_GUARDIAN, ""
);
vm.roll(block.number + 15 * 12);
vm.warp(
block.timestamp + tierProvider().getTier(LibTiers.TIER_GUARDIAN).cooldownWindow * 60
+ 1
);
vm.roll(block.number + 15 * 12);
vm.warp(
block.timestamp + tierProvider().getTier(LibTiers.TIER_GUARDIAN).cooldownWindow * 60
+ 1
);
// When the correct transition T1 is verified, Bob permanently loses his validity bond
// even though it is the correct transition for the verified parent P1.
verifyBlock(Carol, 1);
parentHash = blockHash;
(, b) = L1.getStateVariables();
uint64 lastVerifiedBlockAfter = b.lastVerifiedBlockId;
assertEq(lastVerifiedBlockAfter, lastVerifiedBlockBefore + 1 ); // Verification completed
uint256 bobBalanceAfter = tko.balanceOf(Bob);
assertLt(bobBalanceAfter, bobBalanceBefore);
console2.log("Bob Loss:", bobBalanceBefore - bobBalanceAfter);
console2.log("Bob Loss without counting liveness bond:", bobBalanceBefore - bobBalanceAfter - 1e18); // Liveness bond is 1 ETH in tests
}
Tools Used
Foundry, VSCode
Recommended Mitigation Steps
The simplest solution is to allow the Guardian to pay back validity and contest bonds in the same manner as liveness bonds. This keeps the simple design while allowing bonds to be recovered if a prover’s or contester’s action is ultimately proven correct.
Guardian will pass in data in _proof.data
that specifies the address, tiers and bond type that should be refunded. Given that Guardians already can verify any proof this does not increase centralization.
We also must not recover any reward when proving with the Guardian tier and _overrideWithHigherProof() is called. If the ts.validityBond reward is sent to the Guardian it will be locked. Instead we need to keep it in TaikoL1 such that it can be recovered as described above:
+if (_tier.contestBond != 0){
unchecked {
if (reward > _tier.validityBond) {
_tko.transfer(msg.sender, reward - _tier.validityBond);
} else {
_tko.transferFrom(msg.sender, address(this), _tier.validityBond - reward);
}
}
+}
This is a valid report but we knew this “flaw” and the current behavior is by design.
- The odds that a valid transition is proven, then contested and overwritten by another proof, then proven again with an even higher tier should be rare; if this happens even once, we should know the second prover is buggy and shall change the tier configuration to remove it.
- For provers who suffer a loss due to such prover bugs, the Taiko foundation may send them compensation to cover their loss. We do not want to handle cover-your-loss payments in the protocol.
adaki2004 (Taiko) confirmed, but disagreed with severity and commented:
This is an attack on the tier system, right? But the economic disincentives for doing so shall be granted by the bonds: there is no advantage in challenging proofs which we do know are correct just to make someone lose money. The challenger would lose even more money, and the correct prover would be refunded by the Taiko Foundation.
Severity: medium, (just as: https://github.com/code-423n4/2024-03-taiko-findings/issues/227)
I am going to leave as H, I think there is a direct loss of funds here.
This comment:
The challenger would lose even more money
Makes me second guess that slightly, but still think H is correct.
[H-03] Users will never be able to withdraw their claimed airdrop fully in ERC20Airdrop2.sol contract
Submitted by MrPotatoMagic, also found by Aymen0909, alexfilippov314, pa6kuda, and t4sk
Context: The ERC20Airdrop2.sol contract manages the Taiko token airdrop for eligible users, but withdrawal is not immediate and is subject to a withdrawal window.
Users can claim their tokens within claimStart and claimEnd. Once the claim window is over at claimEnd, they can withdraw their tokens between claimEnd and claimEnd + withdrawalWindow. During this withdrawal period, the tokens unlock linearly i.e. the tokens only become fully withdrawable at claimEnd + withdrawalWindow.
Issue:
The issue is that once the tokens for a user are fully unlocked, the withdraw() function cannot be called anymore due to the ongoingWithdrawals modifier having a strict claimEnd + withdrawalWindow < block.timestamp check in its second condition.
Impact: Although the tokens become fully unlocked when block.timestamp = claimEnd + withdrawalWindow, it is extremely difficult or close to impossible for normal users to time this to get their full allocated claim amount. This means that users are always bound to lose a certain amount of their eligible claim. This lost amount can be small for users who claim closer to claimEnd + withdrawalWindow and higher for those who partially claimed initially or did not claim at all, thinking that they would claim once their tokens were fully unlocked.
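The timing trap can be sketched with a small Python model of the linear unlock and the modifier's window check (the claimEnd, withdrawalWindow, and claimed amounts below are hypothetical):

```python
# Simplified model: tokens unlock linearly over the withdrawal window, but the
# window check mirrors ongoingWithdrawals and rejects anything strictly after
# claimEnd + withdrawalWindow, the only instant of full unlock.
CLAIM_END = 1_000
WITHDRAWAL_WINDOW = 10_000
CLAIMED = 100

def withdrawable(now):
    if CLAIM_END > now or CLAIM_END + WITHDRAWAL_WINDOW < now:
        raise RuntimeError("WITHDRAWALS_NOT_ONGOING")
    return CLAIMED * (now - CLAIM_END) // WITHDRAWAL_WINDOW

assert withdrawable(CLAIM_END + 5_000) == 50               # half unlocked mid-window
assert withdrawable(CLAIM_END + WITHDRAWAL_WINDOW) == 100  # full amount only at the exact second
try:
    withdrawable(CLAIM_END + WITHDRAWAL_WINDOW + 1)        # one second late: reverts
    raise AssertionError("should have reverted")
except RuntimeError:
    pass
```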
Coded POC
How to use this POC:
- Add the POC to test/team/airdrop/ERC20Airdrop2.t.sol.
- Run the POC using forge test --match-test testAirdropIssue -vvv.
- The POC demonstrates how Alice was only able to claim half of her total 100 claimable tokens.
function testAirdropIssue() public {
vm.warp(uint64(block.timestamp + 11));
vm.prank(Alice, Alice);
airdrop2.claim(Alice, 100, merkleProof);
// Roll 5 days after
vm.roll(block.number + 200);
vm.warp(claimEnd + 5 days);
airdrop2.withdraw(Alice);
console.log("Alice balance:", token.balanceOf(Alice));
// Roll 6 days after
vm.roll(block.number + 200);
vm.warp(claimEnd + 11 days);
vm.expectRevert(ERC20Airdrop2.WITHDRAWALS_NOT_ONGOING.selector);
airdrop2.withdraw(Alice);
}
Logs
Logs:
> MockERC20Airdrop @ 0x0000000000000000000000000000000000000000
proxy : 0xF62849F9A0B5Bf2913b396098F7c7019b51A820a
impl : 0x2e234DAe75C793f67A35089C9d99245E1C58470b
owner : 0x7FA9385bE102ac3EAc297483Dd6233D62b3e1496
msg.sender : 0x1804c8AB1F12E6bbf3894d4083f33e07309d1f38
this : 0x7FA9385bE102ac3EAc297483Dd6233D62b3e1496
Alice balance: 50
Recommended Mitigation Steps
In the modifier ongoingWithdrawals(), consider adding a buffer window in the second condition that gives users enough time to claim the fully unlocked tokens.
uint256 constant bufferWindow = X mins/hours/days;
modifier ongoingWithdrawals() {
if (claimEnd > block.timestamp || claimEnd + withdrawalWindow + bufferWindow < block.timestamp) {
revert WITHDRAWALS_NOT_ONGOING();
}
_;
}
adaki2004 (Taiko) confirmed and commented:
It is indeed a bug in the flow, while we removed Airdrop2, it is still a confirmed finding on the repo for auditing.
[H-04] Taiko L1 - Proposer can maliciously cause loss of funds by forcing someone else to pay prover’s fee
Submitted by zzebra83, also found by MrPotatoMagic, monrel, mojito_auditor, and ladboy233
Proposal of new blocks triggers a call to proposeBlock in the LibProposing library. In that function, there is the following block of code:
if (params.coinbase == address(0)) {
params.coinbase = msg.sender;
}
This sets params.coinbase, a parameter supplied by the caller of the function, to msg.sender if it was left empty.
As part of the proposal process, hooks of type AssignmentHook can be called. An assignment hook’s onBlockProposed will be triggered as follows:
// When a hook is called, all ether in this contract will be sent to the hook.
// If the ether sent to the hook is not used entirely, the hook shall send the Ether
// back to this contract for the next hook to use.
// Proposers shall choose to use extra hooks wisely.
IHook(params.hookCalls[i].hook).onBlockProposed{ value: address(this).balance }(
blk, meta_, params.hookCalls[i].data
);
Notice how the metadata is passed to this function. Part of the job of onBlockProposed is to pay the assigned prover their fee, and the payer should be the current proposer of the block. This is done as follows:
// The proposer irrevocably pays a fee to the assigned prover, either in
// Ether or ERC20 tokens.
if (assignment.feeToken == address(0)) {
// Paying Ether
_blk.assignedProver.sendEther(proverFee, MAX_GAS_PAYING_PROVER);
} else {
// Paying ERC20 tokens
IERC20(assignment.feeToken).safeTransferFrom(
_meta.coinbase, _blk.assignedProver, proverFee
);
}
Notice how, if the payment is in ERC20 tokens, the payer will be _meta.coinbase, which, as shown earlier, can be set to any arbitrary address by the proposer. This can lead to a scenario such as:
- Proposer A approves the AssignmentHook contract to spend a portion of their tokens; the allowance is set higher than the actual fee they will be paying.
- Proposer A proposes a block, and a fee is charged and paid to the assigned prover, but there remains allowance that the AssignmentHook contract can still use.
- Proposer B proposes a block and sets params.coinbase to the address of proposer A.
- Proposer A’s address will be the payer of the fee for the assigned prover for the block proposed by proposer B.
The scenario above describes how someone can be maliciously forced to pay fees for block proposals by other actors.
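The allowance abuse can be modeled with simple bookkeeping (illustrative Python; names such as pay_prover_fee are not the contract ABI, and the amounts are hypothetical):

```python
# Toy ERC20 allowance bookkeeping mirroring safeTransferFrom(_meta.coinbase,
# assignedProver, proverFee). Once A over-approves the hook, anyone can name
# A as the coinbase and spend A's remaining allowance.
allowance = {}                      # (owner, spender) -> amount
balance = {"A": 100, "prover": 0}

def approve(owner, spender, amount):
    allowance[(owner, spender)] = amount

def pay_prover_fee(coinbase, fee, hook="AssignmentHook"):
    assert allowance.get((coinbase, hook), 0) >= fee, "insufficient allowance"
    allowance[(coinbase, hook)] -= fee
    balance[coinbase] -= fee
    balance["prover"] += fee

approve("A", "AssignmentHook", 30)  # A over-approves (actual fee is only 10)
pay_prover_fee("A", 10)             # A's own proposal: expected charge
pay_prover_fee("A", 10)             # B proposes with params.coinbase = A
assert balance["A"] == 80           # A paid twice: lost 20, not 10
```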
Recommended Mitigation Steps
A simple fix for this is to ensure the block proposer will always be msg.sender, as such:
if (params.coinbase == address(0) || params.coinbase != msg.sender) {
    params.coinbase = msg.sender;
}
dantaik (Taiko) confirmed and commented:
This is a valid bug report. It has been fixed here: https://github.com/taikoxyz/taiko-mono/pull/16327
[H-05] Signatures can be replayed in withdraw() to withdraw more tokens than the user originally intended
Submitted by lightoasis, also found by 0xleadwizard, wangxx2026, alexfilippov314, ladboy233, and Tendency
Signatures can be replayed in withdraw() to withdraw more tokens than the user originally intended.
Vulnerability Details
In the TimelockTokenPool.sol contract, users can provide a signature to allow someone else to withdraw all their withdrawable tokens on their behalf (TimelockTokenPool.sol#L170):
function withdraw(address _to, bytes memory _sig) external {
if (_to == address(0)) revert INVALID_PARAM();
bytes32 hash = keccak256(abi.encodePacked("Withdraw unlocked Taiko token to: ", _to));
@> address recipient = ECDSA.recover(hash, _sig);
_withdraw(recipient, _to);
}
As seen from above, the signature provided does not include a nonce and this can lead to signature replay attacks. Due to the lack of a nonce, withdraw() can be called multiple times with the same signature. Therefore, if a user provides a signature to withdraw all his withdrawable tokens at one particular time, an attacker can repeatedly call withdraw() with the same signature to withdraw more tokens than the user originally intended. The vulnerability is similar to Arbitrum H-01 where user’s signatures could be replayed to use up more votes than a user intended due to a lack of nonce.
Recommended Mitigation Steps
Consider using a nonce or other signature replay protection in the TimelockTokenPool contract.
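A minimal sketch of a nonce scheme like the one suggested, in Python (hashes stand in for ECDSA signing and recovery; all names here are illustrative, not the contract's):

```python
import hashlib

nonces = {}  # signer -> next expected nonce

def signed_digest(to, nonce):
    # The nonce is bound into the signed message, so each signature is one-shot.
    msg = f"Withdraw unlocked Taiko token to: {to}:{nonce}"
    return hashlib.sha256(msg.encode()).hexdigest()

def withdraw(signer, to, nonce):
    if nonce != nonces.get(signer, 0):
        raise ValueError("invalid or replayed nonce")
    nonces[signer] = nonce + 1       # consume the nonce before paying out
    return signed_digest(to, nonce)

withdraw("alice", "0xRecipient", 0)      # first use succeeds
try:
    withdraw("alice", "0xRecipient", 0)  # replay of the same signature fails
    raise AssertionError("replay should have been rejected")
except ValueError:
    pass
```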
dantaik (Taiko) confirmed and commented:
Valid bug report, trying to fix it in this PR: https://github.com/taikoxyz/taiko-mono/pull/16611/files
Medium Risk Findings (14)
[M-01] There is no slippage check for the eth deposits processing in the LibDepositing.processDeposits
Submitted by Shield, also found by ladboy233
The LibDepositing.depositEtherToL2 function is called by TaikoL1.depositEtherToL2 to deposit ether to the L2 chain. The maximum number of unprocessed eth deposits is capped at _config.ethDepositRingBufferSize - 1, as shown here:
unchecked {
return _amount >= _config.ethDepositMinAmount && _amount <= _config.ethDepositMaxAmount
&& _state.slotA.numEthDeposits - _state.slotA.nextEthDepositToProcess
< _config.ethDepositRingBufferSize - 1;
}
The Taiko configuration states that ethDepositRingBufferSize == 1024. Hence the maximum number of unprocessed eth deposits allowed by the Taiko L1 contract is capped at 1023.
When an L2 block is proposed by calling the LibProposing.proposeBlock function in the TaikoL1 contract, it processes the unprocessed eth deposits by calling LibDepositing.processDeposits. But it is allowed to process at most 32 eth deposits per block, as per the following conditional check in the processDeposits function:
deposits_ = new TaikoData.EthDeposit[](numPending.min(_config.ethDepositMaxCountPerBlock));
Here ethDepositMaxCountPerBlock == 32, as configured in the TaikoL1.getConfig function.
And the fee amount for each of the eth deposits is calculated as follows:
uint96 fee = uint96(_config.ethDepositMaxFee.min(block.basefee * _config.ethDepositGas));
uint96 _fee = deposits_[i].amount > fee ? fee : deposits_[i].amount;
deposits_[i].amount -= _fee;
Hence the deposited eth amount is reduced by the calculated _fee amount for the eth deposit transaction. If the basefee of the L1 block increases significantly, then the maximum fee of ethDepositMaxFee: 1 ether / 10 will be applied, deducting that amount from the eth deposit transferred to the recipient on L2.
Now let’s consider the following scenario:
- nextEthDepositToProcess is currently 100.
- numEthDeposits is currently 1060.
- The number of proposed L2 blocks required to process the 1060th ETH deposit is (1060 - 100) / 32 = 30 L2 blocks.
- These 30 L2 blocks will not all be proposed within a single L1 block and will require multiple L1 blocks.
- If there is heavy congestion on mainnet during this time, the block.basefee of the subsequent L1 blocks will increase. This could cause the maximum fee of _config.ethDepositMaxFee to be charged on the deposited amount (in the LibDepositing.processDeposits function, since a subsequent L1 block would have a higher block.basefee), leading to a loss of funds for the recipient.
- For example, assume the depositor deposits 1.1 ether when the current fee is 0.01 ether, so the recipient expects to receive approximately 1.09 ether at the time of the deposit on L1. But if the deposit is processed in a subsequent L1 block where the fee has risen to the maximum of 0.1 ether, the recipient will only receive approximately 1 ether, costing the recipient a loss of 0.09 ether.

If there were a slippage check a depositor could set for his ETH deposits, he could prevent excessive fee deductions during processing.
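The arithmetic in the example above can be reproduced with a short model of the fee formula (a sketch; the per-deposit gas figure used here is an assumption, only the 0.1 ether ethDepositMaxFee cap comes from the report):

```python
# Numeric model of the fee deduction in LibDepositing.processDeposits.
ETH = 10**18
ETH_DEPOSIT_MAX_FEE = ETH // 10          # ethDepositMaxFee: 1 ether / 10
ETH_DEPOSIT_GAS = 21_000                 # assumed value for _config.ethDepositGas

def net_deposit(amount_wei: int, basefee_wei: int) -> int:
    """What the L2 recipient actually receives for one processed deposit."""
    fee = min(ETH_DEPOSIT_MAX_FEE, basefee_wei * ETH_DEPOSIT_GAS)
    fee = min(fee, amount_wei)           # _fee is capped at the deposit amount
    return amount_wei - fee

deposit = 11 * ETH // 10                 # depositor sends 1.1 ether
expected = deposit - ETH // 100          # fee ~0.01 ether at deposit time (report's example)
spiked = net_deposit(deposit, 10**13)    # basefee spike: fee hits the 0.1 ether cap
loss_vs_expectation = expected - spiked  # 0.09 ether lost relative to expectation
```

With a slippage parameter, the depositor could bound `fee` at processing time instead of accepting whatever the later L1 basefee implies.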
Proof of Concept
unchecked {
return _amount >= _config.ethDepositMinAmount && _amount <= _config.ethDepositMaxAmount
&& _state.slotA.numEthDeposits - _state.slotA.nextEthDepositToProcess
< _config.ethDepositRingBufferSize - 1;
}
ethDepositRingBufferSize: 1024,
ethDepositMinCountPerBlock: 8,
ethDepositMaxCountPerBlock: 32,
uint96 fee = uint96(_config.ethDepositMaxFee.min(block.basefee * _config.ethDepositGas));
uint96 _fee = deposits_[i].amount > fee ? fee : deposits_[i].amount;
deposits_[i].amount -= _fee;
Tools Used
VSCode
Recommended Mitigation Steps
It is recommended to add a slippage check for the fee amount deducted from the deposited ETH in the LibDepositing.processDeposits function, since depositEtherToL2 and processDeposits are two different transactions with a delay between them, during which the L1 block basefee can increase significantly, causing loss of funds to the recipient in the form of a fee increase.
dantaik (Taiko) acknowledged, but disagreed with severity and commented:
Thank you for the feedback. The current issue is valid but minor as we believe users can choose not to deposit Ether using the
depositEtherToL2
function if there are already many deposits pending in the queue. Going forward, the processing of such deposits will likely be moved to the node software directly.
Agree this is valid, the impact is most likely small, but I think the likelihood of it occurring at some point is relatively high. The user is exposed to non-deterministic behavior that they cannot fully understand ahead of signing a transaction.
[M-02] The top tier prover can not re-prove
Submitted by Shield, also found by zzebra83 and Tendency
In the LibProving.proveBlock
function the top tier prover
is allowed to prove a new transition as long as the new transition is different from the previous transition and assert
conditional checks pass.
if (sameTransition) revert L1_ALREADY_PROVED();
if (isTopTier) {
// The top tier prover re-proves.
assert(tier.validityBond == 0);
assert(ts.validityBond == 0 && ts.contestBond == 0 && ts.contester == address(0));
But the assert condition in this logic is wrong, since it checks ts.contestBond == 0 whereas it should check ts.contestBond == 1, because 1 is set as the default value of the ts.contestBond parameter for gas savings, as shown below:
ts_.contestBond = 1; // to save gas
As a result, even though the code expects the top-tier prover to re-prove with a different transition, the transaction will revert.
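The failing check can be illustrated with a minimal model of the transition state (a Python sketch of the control flow, not the Solidity code):

```python
# Taiko initialises contestBond to 1 instead of 0 to keep the storage slot
# non-zero ("to save gas"), so a zero-check can never hold for a fresh
# top-tier transition.
def init_transition() -> dict:
    return {"validityBond": 0, "contestBond": 1, "contester": None}

def top_tier_reprove_shipped(ts: dict) -> bool:
    # Buggy assert as shipped: expects contestBond == 0.
    return ts["validityBond"] == 0 and ts["contestBond"] == 0 and ts["contester"] is None

def top_tier_reprove_fixed(ts: dict) -> bool:
    # Fixed assert: the default sentinel value is 1, not 0.
    return ts["validityBond"] == 0 and ts["contestBond"] == 1 and ts["contester"] is None

ts = init_transition()
```

For a freshly initialised transition, the shipped check fails while the fixed check passes, which matches the revert the warden's test exposes.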
Proof of Concept
Add the following testcase test_L1_GuardianProverCanOverwriteIfNotSameProof_test
to the TaikoL1LibProvingWithTiers.t.sol
test file.
If you change the ts.contestBond == 0
to ts.contestBond == 1 in the second assert
statement of the LibProving.proveBlock
function, the test will run successfully and transaction execution will succeed.
function test_L1_GuardianProverCanOverwriteIfNotSameProof_test() external {
giveEthAndTko(Alice, 1e7 ether, 1000 ether);
giveEthAndTko(Carol, 1e7 ether, 1000 ether);
console2.log("Alice balance:", tko.balanceOf(Alice));
// This is a very weird test (code?) issue here.
// If this line is uncommented,
// Alice/Bob has no balance.. (Causing reverts !!!)
// Current investigations are ongoing with foundry team
giveEthAndTko(Bob, 1e7 ether, 100 ether);
console2.log("Bob balance:", tko.balanceOf(Bob));
// Bob
vm.prank(Bob, Bob);
bytes32 parentHash = GENESIS_BLOCK_HASH;
for (uint256 blockId = 1; blockId < conf.blockMaxProposals * 3; blockId++) {
printVariables("before propose");
(TaikoData.BlockMetadata memory meta,) = proposeBlock(Alice, Bob, 1_000_000, 1024);
//printVariables("after propose");
mine(1);
bytes32 blockHash = bytes32(1e10 + blockId);
bytes32 stateRoot = bytes32(1e9 + blockId);
// This proof cannot be verified obviously because of
// blockhash:blockId
proveBlock(Bob, Bob, meta, parentHash, stateRoot, stateRoot, LibTiers.TIER_GUARDIAN, "");
// Prove as guardian
proveBlock(
Carol, Carol, meta, parentHash, blockHash, stateRoot, LibTiers.TIER_GUARDIAN, ""
);
vm.roll(block.number + 15 * 12);
uint16 minTier = meta.minTier;
vm.warp(block.timestamp + tierProvider().getTier(LibTiers.TIER_GUARDIAN).cooldownWindow * 60 + 1);
verifyBlock(Carol, 1);
parentHash = blockHash;
}
printVariables("");
}
Tools Used
VSCode
Recommended Mitigation Steps
It is recommended to update the ts.contestBond == 0 check in the second assert statement of the LibProving.proveBlock function to ts.contestBond == 1.
dantaik (Taiko) confirmed and commented:
This is a valid bug report, it has been fixed already here: https://github.com/taikoxyz/taiko-mono/pull/16543
[M-03] retryMessage unable to handle edge cases.
Submitted by josephdara, also found by josephdara, grearlake, Shield, MrPotatoMagic (1, 2), Aymen0909, ladboy233, iamandreiski, lanrebayode77, t0x1c, and Fassi_Security (1, 2)
The function retryMessage()
is unable to handle some edge scenarios listed below.
- Reverting or refunding a sender when the receiver is banned after the transaction is placed in a RETRIABLE state.
- A message that is suspended after the transaction is placed in a RETRIABLE state.
Proof of Concept
A message is set to RETRIABLE
in the processMessage
when transfer fails or if the target address does not satisfy some conditions.
//IN PROCESSMESSAGE
if (_invokeMessageCall(_message, msgHash, gasLimit)) {
_updateMessageStatus(msgHash, Status.DONE);
} else {
_updateMessageStatus(msgHash, Status.RETRIABLE);
}
//_invokeMessageCall() FUNCTION
function _invokeMessageCall(
Message calldata _message,
bytes32 _msgHash,
uint256 _gasLimit
)
private
returns (bool success_)
{
if (_gasLimit == 0) revert B_INVALID_GAS_LIMIT();
assert(_message.from != address(this));
_storeContext(_msgHash, _message.from, _message.srcChainId);
if (
_message.data.length >= 4 // msg can be empty
&& bytes4(_message.data) != IMessageInvocable.onMessageInvocation.selector
&& _message.to.isContract()
) {
success_ = false;
} else {
(success_,) = ExcessivelySafeCall.excessivelySafeCall(
_message.to,
_gasLimit,
_message.value,
64, // return max 64 bytes
_message.data
);
}
// Must reset the context after the message call
_resetContext();
}
The issue here is that when the team attempts to suspend a message which is in a RETRIABLE state, the code does not block execution; the same applies to a banned address. This is because the proofReceipt[msgHash] entry which handles suspension is never checked, and neither is addressBanned[_addr].
function retryMessage(
Message calldata _message,
bool _isLastAttempt
)
external
nonReentrant
whenNotPaused
sameChain(_message.destChainId)
{
// If the gasLimit is set to 0 or isLastAttempt is true, the caller must
// be the message.destOwner.
if (_message.gasLimit == 0 || _isLastAttempt) {
if (msg.sender != _message.destOwner) revert B_PERMISSION_DENIED();
}
bytes32 msgHash = hashMessage(_message);
if (messageStatus[msgHash] != Status.RETRIABLE) {
revert B_NON_RETRIABLE();
}
// Attempt to invoke the messageCall.
if (_invokeMessageCall(_message, msgHash, gasleft())) {
_updateMessageStatus(msgHash, Status.DONE);
} else if (_isLastAttempt) {
_updateMessageStatus(msgHash, Status.FAILED);
}
Recommended Mitigation Steps
Re-verify the necessary details to confirm that the transaction is still good to go: check proofReceipt[msgHash].receivedAt and addressBanned[_addr].
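The suggested checks can be sketched as follows (a hypothetical `can_retry` helper mirroring the report's storage names, not the shipped code):

```python
# Sketch of the suspension / ban checks the warden suggests adding to
# retryMessage before re-invoking the message call.
SUSPENDED = 2**64 - 1        # type(uint64).max marks a suspended message

proof_receipt = {}           # msgHash -> receivedAt timestamp
address_banned = set()       # addresses banned via banAddress

def can_retry(msg_hash: str, target: str) -> bool:
    if proof_receipt.get(msg_hash, 0) == SUSPENDED:
        return False         # message was suspended after going RETRIABLE
    if target in address_banned:
        return False         # receiver was banned after going RETRIABLE
    return True

proof_receipt["0xaaaa"] = SUSPENDED
address_banned.add("0xbadd")
```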
adaki2004 (Taiko) disputed and commented:
I’d dispute this with an addition (see end sentence).
2 cases mentioned here:
- Reverting or refunding a sender when the receiver is banned after the transaction is placed in a RETRIABLE state.
Intentional to NOT refund the sender when he/she is banned. (Though we might remove the “banAddress” feature, because it confuses a lot of people. The original intention behind banning an address is: NOT being able to call another, very important contract (
message.to
) on behalf of the Bridge, like the SignalService.)
- Message that is suspended after the transaction is placed in a RETRIABLE state. Not refunding suspended messages is a feature; it is a failsafe/security mechanism. If we ever need to use it, it would be a severe situation, and we do not necessarily want to refund the owner (by default), since it might be a fake message on the destination chain. (That can be one reason - so it makes no sense to refund.) Also, suspension would never happen after RETRIABLE. If we suspend a message, it is between NEW and (DONE or RETRIABLE).
So we are considering removing address banning (and not allowing SignalService to be called) - not because it is an issue, but as a simplification and to avoid confusion. https://github.com/taikoxyz/taiko-mono/pull/16604
Using this issue to aggregate all of the issues around ban list functionality that has since been removed from the sponsors code base.
Note: For full discussion, see here.
[M-04] A recalled ERC20 bridge transfer can lock tokens in the bridge
Submitted by monrel, also found by monrel
A recalled ERC20 bridge transfer can lock funds in the bridge if the call to mint tokens fails on the source chain. Depending on the native token’s logic this could either be a permanent lock or a lock of an unknown duration.
Example of how this can happen with the provided USDCAdapter:
USDC limits the amount of USDC that can be minted on each chain by giving each minter a minting allowance. If the minting allowance is reached, minting reverts. If this happens in a recalled message, the tokens, together with the ETH value, are locked.
Proof of Concept
USDC limits the amount of USDC that can be minted on each chain by giving each minter a minting allowance.
If the _amount <= mintingAllowedAmount check fails for the USDCAdapter, tokens cannot be minted; but since this is a recalled message, the funds are stuck.
Both onMessageInvocation() and onMessageRecalled() call _transferTokens() to either mint or release tokens:
function _transferTokens(
CanonicalERC20 memory _ctoken,
address _to,
uint256 _amount
)
private
returns (address token_)
{
if (_ctoken.chainId == block.chainid) {
token_ = _ctoken.addr;
IERC20(token_).safeTransfer(_to, _amount);
} else {
token_ = _getOrDeployBridgedToken(_ctoken);
IBridgedERC20(token_).mint(_to, _amount);
}
}
A recalled message to bridge USDC L1->L2 will revert when we attempt to mint through the USDCAdapter
function _mintToken(address _account, uint256 _amount) internal override {
usdc.mint(_account, _amount);
On the following condition in the native USDC contract
require(
_amount <= mintingAllowedAmount,
"FiatToken: mint amount exceeds minterAllowance"
);
Course of events that ends in locked funds:
- User bridges USDC from L2->L1
- The message is recalled from L1
- The USDCAdapter has reached the
mintingAllowedAmount
- The recalled message is stuck because minting reverts. The USDC and ETH passed in are both locked.
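The course of events above can be sketched as a toy model (the `UsdcAdapter` class here is illustrative, not the FiatToken implementation):

```python
# Toy model of the minterAllowance gate that strands the recalled message.
class UsdcAdapter:
    def __init__(self, minting_allowed: int) -> None:
        self.minting_allowed = minting_allowed

    def mint(self, amount: int) -> None:
        # require(_amount <= mintingAllowedAmount, "FiatToken: ...")
        if amount > self.minting_allowed:
            raise RuntimeError("FiatToken: mint amount exceeds minterAllowance")
        self.minting_allowed -= amount

adapter = UsdcAdapter(minting_allowed=100)
adapter.mint(90)              # ordinary bridge traffic consumes the allowance
try:
    adapter.mint(50)          # the recalled message's mint now reverts ...
    recall_succeeded = True
except RuntimeError:
    recall_succeeded = False  # ... so the user's USDC and ETH stay locked
```

The lock persists until other users bridge USDC away (freeing allowance), which is why the duration is unknown.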
Tools Used
Foundry, VScode
Recommended Mitigation Steps
Add new functionality in the vault that allows users to send a new message to the destination chain again with new message data if onMessageRecalls()
can not mint tokens. We give users the ability to redeem for canonical tokens instead of being stuck.
dantaik (Taiko) acknowledged and commented:
Thank you for your feedback. In your example, if the user cannot mint tokens in USDCAdapter by using recallMessage, the user can wait and call recallMessage again. That’s why recallMessage is retriable. There is no perfect solution here, and I personally don’t want to make the bridge too complicated by introducing this re-send-another-message feature.
Adding a warning message on the bridge UI might be a good solution, something like: “USDC on Taiko has reached 95% of its max supply cap; bridging USDC to Taiko may result in your funds becoming unavailable for some time until others bridge USDC away from Taiko”.
The sponsor’s comments simply show there isn’t a great solution to the problem; it still represents a loss of user funds (if it goes on forever) or a denial of service, and a risk that users should be aware of.
@dontonka / @adaki2004 (Taiko) any last comments here?
We are OK with med, no issue. Please proceed accordingly - as we don’t have:
- the perfect solution to the problem
- the intention to fix it ATM - since we won’t be using any native tokens anytime soon.

But please proceed in the way which suits the wardens better; we appreciate their efforts. (So not questioning the severity.)
[M-05] Bridge watcher can forge arbitrary message and drain bridge
Submitted by monrel, also found by josephdara, Shield, and t0x1c
The bridge_watchdog
role can forge arbitrary messages and drain the bridge of all ETH and tokens.
Proof of Concept
bridge_watchdog can call suspendMessages() to suspend and un-suspend a message:
function suspendMessages(
bytes32[] calldata _msgHashes,
bool _suspend
)
external
onlyFromOwnerOrNamed("bridge_watchdog")
{
uint64 _timestamp = _suspend ? type(uint64).max : uint64(block.timestamp);
for (uint256 i; i < _msgHashes.length; ++i) {
bytes32 msgHash = _msgHashes[i];
proofReceipt[msgHash].receivedAt = _timestamp;
emit MessageSuspended(msgHash, _suspend);
}
}
When this function is called to un-suspend a message, we set proofReceipt[msgHash].receivedAt = _timestamp. If the msgHash was not proven before, it will now be treated as proven, since any msgHash with a timestamp != 0 is treated as proven:
uint64 receivedAt = proofReceipt[msgHash].receivedAt;
bool isMessageProven = receivedAt != 0
bridge_watchdog can therefore forge arbitrary messages and have them treated as proven by first suspending them and then un-suspending them.
bridge_watchdog is supposed to only be able to ban and suspend messages; in the expected worst case, bridge_watchdog is limited to DDoSing messages and bans until governance removes the bridge_watchdog role.
With the privilege escalation shown here the role can instead drain the bridge of all ETH and tokens.
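The escalation path can be sketched as follows (a model of the two functions quoted above, not the contract itself):

```python
# Un-suspending writes a non-zero timestamp, and any non-zero receivedAt
# is treated as proven -- so suspend + un-suspend "proves" a forged message.
UINT64_MAX = 2**64 - 1

proof_receipt = {}            # msgHash -> receivedAt

def suspend_messages(msg_hashes, suspend: bool, now: int) -> None:
    ts = UINT64_MAX if suspend else now
    for h in msg_hashes:
        proof_receipt[h] = ts

def is_message_proven(msg_hash: str) -> bool:
    return proof_receipt.get(msg_hash, 0) != 0

forged = "0xdeadbeef"         # a message that was never actually proven
suspend_messages([forged], suspend=True, now=1000)
suspend_messages([forged], suspend=False, now=1001)
```

The recommended fix below (writing 0 on un-suspend) breaks exactly this chain, since a zero receivedAt forces the message back through real proof verification.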
POC
Here is a POC showing that we can forge an arbitrary message by suspending and then un-suspending it.
To run this POC, first change the following code in Bridge.t.sol so that we use a real signalService:
register(
+address(addressManager), "signal_service", address(signalService), destChainId
-address(addressManager), "signal_service", address(mockProofSignalService), destChainId
);
Paste the code below into Bridge.t.sol and run forge test --match-test testWatchdogDrain -vvv
function testWatchdogDrain() public {
uint256 balanceBefore = Bob.balance;
IBridge.Message memory message = IBridge.Message({
id: 0,
from: address(bridge),
srcChainId: uint64(block.chainid),
destChainId: destChainId,
srcOwner: Alice,
destOwner: Alice,
to: Bob,
refundTo: Alice,
value: 10 ether,
fee: 1,
gasLimit: 1_000_000,
data: "",
memo: ""
});
bytes memory proof = hex"00";
bytes32 msgHash = destChainBridge.hashMessage(message);
bytes32[] memory msgHashA = new bytes32[](1);
msgHashA[0] = msgHash;
vm.prank(Alice);
destChainBridge.suspendMessages(msgHashA, true);
vm.prank(Alice);
destChainBridge.suspendMessages(msgHashA, false);
vm.chainId(destChainId);
vm.prank(Bob, Bob);
destChainBridge.processMessage(message, proof);
IBridge.Status status = destChainBridge.messageStatus(msgHash);
assertEq(status == IBridge.Status.DONE, true);
console2.log("Bobs Stolen funds", Bob.balance-balanceBefore);
console2.log("We have successfully processed a message without actually proving it!");
}
Tools Used
Foundry, VScode
Recommended Mitigation Steps
Un-suspended messages should be set to 0 and be proven or re-proven.
function suspendMessages(
bytes32[] calldata _msgHashes,
bool _suspend
)
external
onlyFromOwnerOrNamed("bridge_watchdog")
{
+ uint64 _timestamp = _suspend ? type(uint64).max : 0;
for (uint256 i; i < _msgHashes.length; ++i) {
bytes32 msgHash = _msgHashes[i];
proofReceipt[msgHash].receivedAt = _timestamp;
emit MessageSuspended(msgHash, _suspend);
}
}
dantaik (Taiko) confirmed and commented:
This is a valid bug report. The bug has been fixed in this PR: https://github.com/taikoxyz/taiko-mono/pull/16545
0xean (Judge) decreased severity to Medium and commented:
Good report, but I am not sure it qualifies as H severity and most likely should be M.
I think there is a pre-condition here (a malicious watchdog).
2 — Med: Assets not at direct risk, but the function of the protocol or its availability could be impacted, or leak value with a hypothetical attack path with stated assumptions, but external requirements.
Agree that if this attack is feasible, it represents privilege escalation.
As pointed out previously:
Privilege escalation issues are judged by likelihood and impact and their severity is uncapped.
Note: For full discussion, see here.
[M-06] First block proposer check in the LibProposing._isProposerPermitted
function is erroneous
Submitted by Shield, also found by monrel and blockdev
The LibProposing.proposeBlock function calls the _isProposerPermitted private function to check whether a designated proposer is set; if one is set, only that specific address has permission to propose the block.
In the _isProposerPermitted
function, for the first block after the genesis block only the proposerOne
is allowed to propose the first block as shown below:
address proposerOne = _resolver.resolve("proposer_one", true);
if (proposerOne != address(0) && msg.sender != proposerOne) {
return false;
}
But the issue here is that when msg.sender == proposerOne, the function does not return true immediately; it falls through to the final check. If proposer != address(0) && msg.sender != proposer, then even though msg.sender == proposerOne holds for the first block, _isProposerPermitted will still return false, reverting the proposal of the first block.
Hence, even though proposer_one is the designated proposer of the first block, the transaction will still revert whenever the above conditions occur and _isProposerPermitted returns false for the first block after the genesis block.
This breaks the block proposing logic, since the proposal of the first block after the genesis block reverts, preventing subsequent blocks from being proposed.
Proof of Concept
TaikoData.SlotB memory b = _state.slotB;
if (!_isProposerPermitted(b, _resolver)) revert L1_UNAUTHORIZED();
function _isProposerPermitted(
TaikoData.SlotB memory _slotB,
IAddressResolver _resolver
)
private
view
returns (bool)
{
if (_slotB.numBlocks == 1) {
// Only proposer_one can propose the first block after genesis
address proposerOne = _resolver.resolve("proposer_one", true);
if (proposerOne != address(0) && msg.sender != proposerOne) {
return false;
}
}
address proposer = _resolver.resolve("proposer", true);
return proposer == address(0) || msg.sender == proposer;
}
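The dead end can be reproduced with a direct transcription of the function’s branching (a Python sketch; None plays the role of address(0) and the addresses are placeholders):

```python
# Transcription of _isProposerPermitted's logic.
def is_proposer_permitted(num_blocks, proposer_one, proposer, sender):
    if num_blocks == 1:
        # Only proposer_one can propose the first block after genesis ...
        if proposer_one is not None and sender != proposer_one:
            return False
        # ... but there is no early `return True` here, so the caller
        # still has to pass the generic proposer check below.
    return proposer is None or sender == proposer

# proposer_one is set and is the caller, yet a distinct "proposer" is also
# registered: the first block after genesis can then never be proposed.
first_block_allowed = is_proposer_permitted(1, "0xP1", "0xP2", "0xP1")
```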
Tools Used
VSCode
Recommended Mitigation Steps
It is recommended to add logic to the LibProposing._isProposerPermitted function to return true when msg.sender == proposerOne when proposing the first block after the genesis block.
dantaik (Taiko) confirmed and commented:
I think this is a valid bug: fixing it here
[M-07] Incorrect __Essential_init() function is used in TaikoToken, making the snapshooter unable to call snapshot()
Submitted by MrPotatoMagic, also found by Limbooo, imare, and t0x1c
https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/L1/TaikoToken.sol#L34
https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/L1/TaikoToken.sol#L52
The EssentialContract.sol contract is inherited by the TaikoToken contract. This essential contract contains two __Essential_init() functions: one with an owner parameter only (see here), and the other with owner and address manager parameters (see here).
The issue with the current code is that it uses the __Essential_init() overload with the owner parameter only. This causes the onlyFromOwnerOrNamed(“snapshooter”) modifier on the snapshot function to be unable to resolve the snapshooter role, since the address manager contract was never set during initialization, thus causing a revert.
Due to this:
- Snapshooter role is denied from taking snapshots.
- Timely snapshots for certain periods could have failed by the snapshooter since they would have required the owner to jump in by the time the issue was realized.
- Correct/Intended functionality of the protocol is affected i.e. the snapshooter role assigned to an address cannot ever perform its tasks validly.
Proof of Concept
Here is the whole process:
- Snapshooter address calls the snapshot() function. The onlyFromOwnerOrNamed(“snapshooter”) modifier is encountered first.
File: TaikoToken.sol
57: function snapshot() public onlyFromOwnerOrNamed("snapshooter") {
58: _snapshot();
59: }
- In the second condition, the modifier calls the resolve() function with the “snapshooter” role as
_name
in order to check if the caller (msg.sender) is indeed the address approved by the owner.
File: EssentialContract.sol
46: modifier onlyFromOwnerOrNamed(bytes32 _name) {
47: if (msg.sender != owner() && msg.sender != resolve(_name, true))
48: revert RESOLVER_DENIED();
49: _;
50: }
- The resolve() function is called, which internally calls _resolve(). In _resolve(), the condition on Line 69 evaluates to true and the call reverts. This is because the addressManager address was never set during initialization using the __Essential_init() overload with the owner and address manager parameters. Due to this, the snapshooter address is denied from performing its allocated tasks.
File: AddressResolver.sol
64: function _resolve(
65: uint64 _chainId,
66: bytes32 _name,
67: bool _allowZeroAddress
68: ) private view returns (address payable addr_) {
69: if (addressManager == address(0)) revert RESOLVER_INVALID_MANAGER();
70:
71: addr_ = payable(
72: IAddressManager(addressManager).getAddress(_chainId, _name)
73: );
74:
75: if (!_allowZeroAddress && addr_ == address(0)) {
76: revert RESOLVER_ZERO_ADDR(_chainId, _name);
77: }
78: }
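The revert chain can be modeled end-to-end (a sketch; the class and method names merely mirror the contracts quoted above):

```python
# Model: the resolver needs addressManager, which the one-argument
# __Essential_init overload never sets, so role resolution always reverts.
class ResolverInvalidManager(Exception):
    pass

class EssentialContract:
    def __init__(self, owner, address_manager=None):
        self.owner = owner
        self.address_manager = address_manager  # stays None with the 1-arg init
        self.roles = {}

    def resolve(self, name):
        if self.address_manager is None:
            raise ResolverInvalidManager()      # RESOLVER_INVALID_MANAGER
        return self.roles.get(name)

    def snapshot(self, caller):
        # onlyFromOwnerOrNamed("snapshooter")
        if caller != self.owner and caller != self.resolve("snapshooter"):
            raise PermissionError("RESOLVER_DENIED")
        return "snapshot taken"

token = EssentialContract(owner="0xowner")      # address manager never set
```

The owner path still works (the modifier short-circuits on owner()), which is why the bug only surfaces when the snapshooter role calls snapshot().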
Recommended Mitigation Steps
In the init() function, consider using the __Essential_init() overload with the owner and address manager parameters instead of the overload with the owner parameter only. This would allow the snapshooter address to take snapshots as expected.
dantaik (Taiko) confirmed and commented:
This is a valid bug report. The bug is fixed by https://github.com/taikoxyz/taiko-mono/commit/c64ec193c95113a4c33692289e23e8d9fa864073
[M-08] Bridged tokens would be lost if sender and receiver are contracts that don’t implement fallback/receive
Submitted by Shield, also found by Shield
When a bridged token is received on the dest chain, ERC20Vault.onMessageInvocation()
is being called.
onMessageInvocation() always calls to.sendEther(msg.value), even when msg.value is zero.
sendEther() attempts to call the contract with the supplied value and empty data. If the to address is a contract that implements neither a fallback function nor receive, the entire transaction reverts.
The same issue occurs when recalling the message: if the sender is also a contract that implements neither a fallback nor receive, the recall fails as well.
Impact
Funds would be lost, since the sending can’t be finalized and recovering would revert as well.
While this might be considered a user error when sending a value greater than zero (the user should have checked that the to address can receive ETH), the same can’t be said for the case where the value is zero - the user won’t expect the vault to call the to contract with zero value.
Proof of Concept
Add the following PoC to test\tokenvault\ERC20Vault.t.sol
:
Coded PoC
contract NoFallback{
// other functions might be implemented here, but neither a fallback nor receive
}
function test_noFallback()
public
{
vm.startPrank(Alice);
vm.chainId(destChainId);
erc20.mint(address(erc20Vault));
uint256 amount = 0;
address to = address(new NoFallback());
uint256 erc20VaultBalanceBefore = erc20.balanceOf(address(erc20Vault));
uint256 toBalanceBefore = erc20.balanceOf(to);
destChainIdBridge.sendReceiveERC20ToERC20Vault(
erc20ToCanonicalERC20(destChainId),
Alice,
to,
amount,
bytes32(0),
address(erc20Vault),
srcChainId,
0
);
uint256 erc20VaultBalanceAfter = erc20.balanceOf(address(erc20Vault));
assertEq(erc20VaultBalanceBefore - erc20VaultBalanceAfter, amount);
uint256 toBalanceAfter = erc20.balanceOf(to);
assertEq(toBalanceAfter - toBalanceBefore, amount);
}
function test_20Vault_onMessageRecalled_20() public {
Alice = address(new NoFallback());
erc20.mint(Alice);
vm.startPrank(Alice);
uint256 amount = 2 wei;
erc20.approve(address(erc20Vault), amount);
uint256 aliceBalanceBefore = erc20.balanceOf(Alice);
uint256 erc20VaultBalanceBefore = erc20.balanceOf(address(erc20Vault));
IBridge.Message memory _messageToSimulateFail = erc20Vault.sendToken(
ERC20Vault.BridgeTransferOp(
destChainId, address(0), Bob, address(erc20), amount, 1_000_000, 0, Bob, ""
)
);
uint256 aliceBalanceAfter = erc20.balanceOf(Alice);
uint256 erc20VaultBalanceAfter = erc20.balanceOf(address(erc20Vault));
assertEq(aliceBalanceBefore - aliceBalanceAfter, amount);
assertEq(erc20VaultBalanceAfter - erc20VaultBalanceBefore, amount);
// No need to imitate that it is failed because we have a mock SignalService
bridge.recallMessage(_messageToSimulateFail, bytes(""));
uint256 aliceBalanceAfterRecall = erc20.balanceOf(Alice);
uint256 erc20VaultBalanceAfterRecall = erc20.balanceOf(address(erc20Vault));
// Release -> original balance
assertEq(aliceBalanceAfterRecall, aliceBalanceBefore);
assertEq(erc20VaultBalanceAfterRecall, erc20VaultBalanceBefore);
}
Output:
Failing tests:
Encountered 1 failing test in test/tokenvault/ERC20Vault.t.sol:TestERC20Vault
[FAIL. Reason: ETH_TRANSFER_FAILED()] test_20Vault_receive_erc20_canonical_to_dest_chain_transfers_from_canonical_token() (gas: 201153)
Failing tests:
Encountered 1 failing test in test/tokenvault/ERC20Vault.t.sol:TestERC20Vault
[FAIL. Reason: ETH_TRANSFER_FAILED()] test_20Vault_onMessageRecalled_20()
Recommended Mitigation Steps
- Don’t call sendEther() when the value is zero, or modify sendEther() to return when the value is zero.
- Find a solution for cases when the value is non-zero. This one is a bit more complicated; one way might be to allow the sender to redeem the ERC20 tokens while giving up on the ETH.
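The zero-value mitigation can be sketched as a guard at the top of the send routine (illustrative; `to_has_receive` is a stand-in for whether the destination contract can accept a plain call):

```python
# Guarded send: skip the external call entirely when the value is zero,
# so a receiver without fallback/receive can no longer revert the transfer.
def send_ether(to_has_receive: bool, value: int) -> None:
    if value == 0:
        return                    # nothing to send: don't touch the target
    if not to_has_receive:
        raise RuntimeError("ETH_TRANSFER_FAILED")
    # ... perform the actual value transfer here ...

send_ether(to_has_receive=False, value=0)  # zero-value case no longer reverts
```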
dantaik (Taiko) confirmed and commented:
We have changed the sendEther function such that if the amount is 0, there is no further action and sendEther simply returns.
If the fallback and receive functions are both unimplemented on the destination chain for the to address, then the owner can fail the message with retryMessage({…, _isLastAttempt: true}) or use failMessage(…); then, on the source chain, the owner can call recallMessage to get back his tokens.
At the end of the day, the user must trust the dapp that uses our Bridge to set the right message parameters.
[M-09] LibProposing:proposeBlock allows blocks with a zero parentMetaHash to be proposed after the genesis block and avoid parent block verification
Submitted by joaovwfreire
The proposeBlock at the LibProposing library has the following check to ensure the proposed block has the correct parentMetaHash set:
function proposeBlock(
TaikoData.State storage _state,
TaikoData.Config memory _config,
IAddressResolver _resolver,
bytes calldata _data,
bytes calldata _txList
)
internal
returns (TaikoData.BlockMetadata memory meta_, TaikoData.EthDeposit[] memory deposits_)
{
...
if (params.parentMetaHash != 0 && parentMetaHash != params.parentMetaHash) {
revert L1_UNEXPECTED_PARENT();
}
...
}
However, there are no sanity checks to ensure params.parentMetaHash is not zero outside of the genesis block.
Impact
Malicious proposers can propose new blocks without any parentMetaHash. This can cause a maliciously-generated block to be artificially contested, as the final block relies on data held by the meta_ variable. Snippet 1:
TaikoData.Block memory blk = TaikoData.Block({
metaHash: keccak256(abi.encode(meta_)),
...
})
This also generates issues for independent provers, as they may utilize the proposed block’s data together with the correct parentMetaHash to attempt to prove it, which will make the LibProving:proveBlock call revert with an L1_BLOCK_MISMATCH error:
function proveBlock
...
if (blk.blockId != _meta.id || blk.metaHash != keccak256(abi.encode(_meta))) {
revert L1_BLOCK_MISMATCH();
}
...
}
Also, according to the documentation, if the parent block hash is incorrect, the winning transition won’t be used for block verification, and the prover will forfeit their validity bond entirely. If a maliciously proposed block with a zero parent block hash is contested and a higher-tier prover ends up proving the proposed block, then he/she loses his/her own validity bond.
Proof of Concept
The test suite contains TaikoL1TestBase:proposeBlock, which creates new block proposals with a zero parentMetaHash. This is called multiple times in tests like test_L1_verifying_multiple_blocks_once and test_L1_multiple_blocks_in_one_L1_block in the TaikoL1.t.sol test file, demonstrating the lack of reversion when parentMetaHash is zero outside of the genesis block.
Recommended Mitigation Steps
Make sure to check that the parentMetaHash value is not zero when not at the genesis block; otherwise, users will be able to wrongly induce contestations.
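The recommended check can be sketched as follows (`validate_parent` is a hypothetical helper that combines the existing revert with the proposed zero-hash guard):

```python
# Sketch: outside the genesis block, reject proposals whose parentMetaHash
# is zero, instead of silently skipping parent verification.
ZERO = b"\x00" * 32

def validate_parent(block_id: int, params_parent: bytes, actual_parent: bytes) -> None:
    if block_id > 1 and params_parent == ZERO:
        raise ValueError("L1_UNEXPECTED_PARENT")  # proposed: forbid the zero bypass
    if params_parent != ZERO and params_parent != actual_parent:
        raise ValueError("L1_UNEXPECTED_PARENT")  # existing check
```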
[M-10] The decision to return the liveness bond depends solely on the last guardian
Submitted by alexfilippov314, also found by t0x1c
Vulnerability details
The GuardianProver
contract is a multisig that might contest any proof in some exceptional cases (bugs in the prover or verifier). To contest a proof, a predefined number of guardians should approve a hash of the message that includes _meta
and _tran
.
function approve(
TaikoData.BlockMetadata calldata _meta,
TaikoData.Transition calldata _tran,
TaikoData.TierProof calldata _proof
)
external
whenNotPaused
nonReentrant
returns (bool approved_)
{
if (_proof.tier != LibTiers.TIER_GUARDIAN) revert INVALID_PROOF();
bytes32 hash = keccak256(abi.encode(_meta, _tran));
approved_ = approve(_meta.id, hash);
if (approved_) {
deleteApproval(hash);
ITaikoL1(resolve("taiko", false)).proveBlock(_meta.id, abi.encode(_meta, _tran, _proof));
}
emit GuardianApproval(msg.sender, _meta.id, _tran.blockHash, approved_);
}
The issue arises from the fact that the approved message doesn’t include the _proof
. It means that the last approving guardian can provide any desired value in the _proof
. The data from the _proof
is used to determine whether it is necessary to return the liveness bond to the assigned prover or not:
if (isTopTier) {
// A special return value from the top tier prover can signal this
// contract to return all liveness bond.
bool returnLivenessBond = blk.livenessBond > 0 && _proof.data.length == 32
&& bytes32(_proof.data) == RETURN_LIVENESS_BOND;
if (returnLivenessBond) {
tko.transfer(blk.assignedProver, blk.livenessBond);
blk.livenessBond = 0;
}
}
As a result, the last guardian can solely decide whether to return the liveness bond to the assigned prover or not.
Impact
The decision to return the liveness bond depends solely on the last guardian.
Recommended Mitigation Steps
Consider including the _proof
in the approved message.
bytes32 hash = keccak256(abi.encode(_meta, _tran, _proof));
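The effect of binding the proof data into the approved hash can be demonstrated with a small model (sha3-256 and byte concatenation stand in for Solidity’s keccak256/abi.encode here; the encoding details differ, but the binding argument does not):

```python
import hashlib

def approval_hash_old(meta: bytes, tran: bytes) -> bytes:
    # Old scheme: the proof is not part of what guardians approve.
    return hashlib.sha3_256(meta + tran).digest()

def approval_hash_new(meta: bytes, tran: bytes, proof_data: bytes) -> bytes:
    # Mitigated scheme: proof data is bound into the approved hash.
    return hashlib.sha3_256(meta + tran + proof_data).digest()

meta, tran = b"meta", b"tran"
# Old scheme: the approved hash is identical whatever proof the last
# guardian later supplies alongside it.
old_is_proof_blind = approval_hash_old(meta, tran) == approval_hash_old(meta, tran)
# New scheme: changing proof.data changes what the guardians approved.
new_binds_proof = (
    approval_hash_new(meta, tran, b"RETURN_LIVENESS_BOND")
    != approval_hash_new(meta, tran, b"")
)
```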
dantaik (Taiko) confirmed and commented:
Now guardian provers must also agree on whether the liveness bond should be returned (bytes32(proof.data) == LibStrings.HRETURNLIVENESSBOND), so the hash they sign is now:
bytes32 hash = keccak256(abi.encode(_meta, _tran, _proof.data));
rather than the previous code:
bytes32 hash = keccak256(abi.encode(_meta, _tran));
[M-11] Proposers would choose to avoid higher tier by exploiting non-randomness of parameter used in getMinTier()
Submitted by t0x1c, also found by Mahi_Vasisth
The issue exists for both MainnetTierProvider.sol and TestnetTierProvider.sol. For this report, we shall concentrate only on describing it via MainnetTierProvider.sol.
The proving tier is chosen by the getMinTier() function, which accepts a _rand param.
File: contracts/L1/tiers/MainnetTierProvider.sol
66: function getMinTier(uint256 _rand) public pure override returns (uint16) {
67: // 0.1% require SGX + ZKVM; all others require SGX
68: @---> if (_rand % 1000 == 0) return LibTiers.TIER_SGX_ZKVM;
69: else return LibTiers.TIER_SGX;
70: }
If _rand % 1000 == 0, a costlier tier TIER_SGX_ZKVM is used instead of the cheaper TIER_SGX. The _rand param is passed in the form of meta_.difficulty, which is calculated inside proposeBlock():
File: contracts/L1/libs/LibProposing.sol
199: // Following the Merge, the L1 mixHash incorporates the
200: // prevrandao value from the beacon chain. Given the possibility
201: // of multiple Taiko blocks being proposed within a single
202: // Ethereum block, we choose to introduce a salt to this random
203: // number as the L2 mixHash.
204: @---> meta_.difficulty = keccak256(abi.encodePacked(block.prevrandao, b.numBlocks, block.number));
205:
206: // Use the difficulty as a random number
207: meta_.minTier = ITierProvider(_resolver.resolve("tier_provider", false)).getMinTier(
208: @---> uint256(meta_.difficulty)
209: );
As can be seen, all the parameters used on L204 to calculate meta_.difficulty can be known in advance, so a proposer can choose not to propose when meta_.difficulty modulo 1000 equals zero, because in such cases the proof will cost him more (SGX + ZK proof in this case).
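The proposer's strategy can be sketched in Python (sha256 stands in for keccak256, and the tier ids are placeholders, not Taiko's real constants; the only point is that every input to the difficulty is known before proposing):

```python
import hashlib

TIER_SGX, TIER_SGX_ZKVM = 200, 300  # placeholder tier ids for illustration

def difficulty(prevrandao: int, num_blocks: int, block_number: int) -> int:
    # Stand-in for keccak256(abi.encodePacked(...)): deterministic in inputs
    # the proposer already knows, hence fully predictable.
    packed = f"{prevrandao}:{num_blocks}:{block_number}".encode()
    return int.from_bytes(hashlib.sha256(packed).digest(), "big")

def get_min_tier(rand: int) -> int:
    return TIER_SGX_ZKVM if rand % 1000 == 0 else TIER_SGX

def should_propose(prevrandao: int, num_blocks: int, block_number: int) -> bool:
    # A rational proposer simulates the tier locally and skips costly blocks.
    return get_min_tier(difficulty(prevrandao, num_blocks, block_number)) == TIER_SGX

# Over many candidate blocks, roughly 1 in 1000 would be skipped.
skipped = sum(1 for n in range(10_000) if not should_propose(42, n, 1_000_000 + n))
```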
Impact
Since the proposer will now wait for the next or a future block to call proposeBlock() instead of the current, costlier one, transactions will take longer to finalize.
If _rand were truly random, it would be an even playing field in all situations, as the proposer would not be able to pick and choose, since he would not know in advance which tier he might get. We would then truly have:
67: // 0.1% require SGX + ZKVM; all others require SGX
Recommended Mitigation Steps
Consider using VRF-like solutions to make _rand truly random.
dantaik (Taiko) acknowledged and commented:
This is a very well known issue.
Using VRF creates a third party dependency which may be a bigger risk for a Based rollup. We’ll monitor how this plays out and mitigate the issue later.
Eventually we will have only one proof tier (an “aggregated ZK multiproof”), which will be the default/min too. (Maybe keeping guardian for a while as a failsafe, but that one also cannot be “picked” with this pseudo-random calculation.) Also, the Taiko foundation will run a proposer node, so in case no one is willing to propose to avoid fees, we will, regardless of cost - at least until we reach the one-tier maturity.
genesiscrew (Warden) commented:
Considering this report and the responses from the sponsors, I am unable to see how this would impact the function of the protocol in such a way that would deem it a medium risk. I personally think this is informational. The report states proving will take longer because it assumes all proposers will want to avoid paying fees because they can predict the block difficulty. I find that a bit of a stretch.
Not the proving but the liveness (proposing) would take longer, as provers could decline to grant signatures to prove blocks - the evaluation of which happens during proposeBlock. But for at least 2 years post-mainnet, the Taiko foundation is committed to calling proposeBlock every X time intervals (even if not breaking even) to keep the liveness and get over this. And as stated, hopefully this minTier() will vanish by then - hopefully even within months after launch (not years), once ZK proving is cheap enough. So for now we would say it is a known issue we are aware of.
Thank you for the inputs. From what I see, this is being acknowledged by the sponsor as a valid issue which is known to the team. Also important to note that it wasn’t mentioned in the list of C4 “known issues” section on the audit page, so should qualify as a Medium.
Can accept medium.
[M-12] Invocation delays are not honoured when protocol unpauses
Submitted by t0x1c
Context: The protocol has pause() and unpause() functions inside EssentialContract.sol, which are tracked throughout the protocol via the whenPaused and whenNotPaused modifiers.
Issue: Various delays and time lapses throughout the protocol ignore the effect of such pauses. The example in focus is processMessage(), which does not take the pause duration into account while checking invocationDelay and invocationExtraDelay. One impact of this is that it allows a non-preferred executor to front-run the preferredExecutor after an unpause.
File: contracts/bridge/Bridge.sol
233: @---> (uint256 invocationDelay, uint256 invocationExtraDelay) = getInvocationDelays();
234:
235: if (!isMessageProven) {
236: if (!_proveSignalReceived(signalService, msgHash, _message.srcChainId, _proof)) {
237: revert B_NOT_RECEIVED();
238: }
239:
240: receivedAt = uint64(block.timestamp);
241:
242: if (invocationDelay != 0) {
243: proofReceipt[msgHash] = ProofReceipt({
244: receivedAt: receivedAt,
245: preferredExecutor: _message.gasLimit == 0 ? _message.destOwner : msg.sender
246: });
247: }
248: }
249:
250: if (invocationDelay != 0 && msg.sender != proofReceipt[msgHash].preferredExecutor) {
251: // If msg.sender is not the one that proved the message, then there
252: // is an extra delay.
253: unchecked {
254: invocationDelay += invocationExtraDelay;
255: }
256: }
257:
258: @---> if (block.timestamp >= invocationDelay + receivedAt) {
Description & Impact
Consider the following flow:
- Assumption: invocationDelay = 60 minutes and invocationExtraDelay = 30 minutes.
- A message is sent.
- The first call to processMessage() occurred at t, where it was proven by Bob, i.e. its receivedAt = t. Bob is marked as the preferredExecutor.
- The preferred executor should be able to call processMessage() at t+60, while a non-preferred executor should be able to call it only at t+90 due to the code logic on L250.
- At t+55, the protocol is paused.
- At t+100, the protocol is unpaused.
- Impact: The 30-minute time-window advantage which the preferred executor had over the non-preferred one is now lost to him. L258 now considers the invocation delays to have passed, and hence the non-preferred executor can immediately call processMessage(), front-running Bob and pocketing the reward of message.fee on L298.
File: contracts/bridge/Bridge.sol
293: // Refund the processing fee
294: if (msg.sender == refundTo) {
295: refundTo.sendEther(_message.fee + refundAmount);
296: } else {
297: // If sender is another address, reward it and refund the rest
298: @---> msg.sender.sendEther(_message.fee);
299: refundTo.sendEther(refundAmount);
300: }
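The timeline above can be checked with a minimal Python sketch. The `can_execute_fixed` variant is an assumption about one possible mitigation (extending the deadline by the paused duration), not the contract's actual logic:

```python
INVOCATION_DELAY = 60   # minutes, per the report's assumption
EXTRA_DELAY = 30        # extra delay for a non-preferred executor

def can_execute_current(now: int, received_at: int, is_preferred: bool) -> bool:
    # Current logic: pure wall-clock comparison; paused time still counts.
    delay = INVOCATION_DELAY + (0 if is_preferred else EXTRA_DELAY)
    return now >= received_at + delay

def can_execute_fixed(now: int, received_at: int, is_preferred: bool,
                      paused_duration: int) -> bool:
    # Hypothetical mitigation: push the deadline out by the paused time.
    delay = INVOCATION_DELAY + (0 if is_preferred else EXTRA_DELAY)
    return now >= received_at + delay + paused_duration

# Message proven at t=0, paused at t=55, unpaused at t=100 (45 min paused).
# Right after unpause, the current logic lets a non-preferred executor
# front-run Bob even though only 55 minutes of "live" time elapsed.
front_run_possible = can_execute_current(100, 0, is_preferred=False)
front_run_blocked = not can_execute_fixed(100, 0, False, paused_duration=45)
```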
Similar behaviour, where the paused time is ignored by the protocol, can be witnessed in:
- recallMessage(), which similarly uses invocationDelay. However, no invocationExtraDelay is used there.
- TimelockTokenPool.sol for:
// If non-zero, indicates the start time for the recipient to receive
// tokens, subject to an unlocking schedule.
uint64 grantStart;
// If non-zero, indicates the time after which the token to be received
// will be actually non-zero
uint64 grantCliff;
// If non-zero, specifies the total seconds required for the recipient
// to fully own all granted tokens.
uint32 grantPeriod;
// If non-zero, indicates the start time for the recipient to unlock
// tokens.
uint64 unlockStart;
// If non-zero, indicates the time after which the unlock will be
// actually non-zero
uint64 unlockCliff;
// If non-zero, specifies the total seconds required for the recipient
// to fully unlock all owned tokens.
uint32 unlockPeriod;
- TaikoData.sol for:
// The max period in seconds that a blob can be reused for DA.
uint24 blobExpiry;
Recommended Mitigation Steps
Introduce a new variable that keeps track of how much time has already been spent in the valid wait window before a pause happened, and also track the last unpause timestamp (similar to how it is done in pauseProving() and the unpausing mechanisms). Also refer to my other recommendation under the report titled: “Incorrect calculations for cooldownWindow and provingWindow could cause a state transition to spend more than expected time in these windows”. That will help fix the issue without any further leaks.
dantaik (Taiko) confirmed and commented:
This is a valid bug report, fixing in https://github.com/taikoxyz/taiko-mono/pull/16612
TimelockTokenPool.sol will not have a similar fix as the risk is very manageable. Blob caching/sharing is disabled, so no fix for it as well.
[M-13] Taiko SGX Attestation - Improper validation in certchain decoding
Submitted by zzebra83
As part of its ZK proof setup, Taiko leverages SGX provers. It also enables remote SGX attestation via code from Automata, which provides a modular attestation layer extending machine-level trust to Ethereum via the AutomataDcapV3Attestation repo, which is in scope of this audit.
Anyone with SGX hardware can register their instance to be an SGX prover in the Taiko Network via calling the registerInstance function in SgxVerifier.sol. This is why attestation is critical to prove the reliability and trustworthiness of the SGX prover.
The attestation process of SGX provers is a multi-step process, and starts with calling the verifyParsedQuote function in AutomataDcapV3Attestation.sol. One of the steps involves decoding the certchain provided by the SGX prover, as seen below:
// Step 4: Parse Quote CertChain
IPEMCertChainLib.ECSha256Certificate[] memory parsedQuoteCerts;
TCBInfoStruct.TCBInfo memory fetchedTcbInfo;
{
// 536k gas
parsedQuoteCerts = new IPEMCertChainLib.ECSha256Certificate[](3);
for (uint256 i; i < 3; ++i) {
bool isPckCert = i == 0; // additional parsing for PCKCert
bool certDecodedSuccessfully;
// todo! move decodeCert offchain
(certDecodedSuccessfully, parsedQuoteCerts[i]) = pemCertLib.decodeCert(
authDataV3.certification.decodedCertDataArray[i], isPckCert
);
if (!certDecodedSuccessfully) {
return (false, retData);
}
}
}
After this step is executed, a number of other steps are performed, including:
- Step 5: Basic PCK and TCB check
- Step 6: Verify TCB level
- Step 7: Verify cert chain for PCK
- Step 8: Verify the local attestation sig and QE report sig
The decoding of the certchain happens through the PEMCertChainLib library and involves a number of steps, one of which is to validate the decoded notBefore and notAfter tags of the certificate:
{
uint256 notBeforePtr = der.firstChildOf(tbsPtr);
uint256 notAfterPtr = der.nextSiblingOf(notBeforePtr);
bytes1 notBeforeTag = der[notBeforePtr.ixs()];
bytes1 notAfterTag = der[notAfterPtr.ixs()];
if (
(notBeforeTag != 0x17 && notBeforeTag == 0x18)
|| (notAfterTag != 0x17 && notAfterTag != 0x18)
) {
return (false, cert);
}
cert.notBefore = X509DateUtils.toTimestamp(der.bytesAt(notBeforePtr));
cert.notAfter = X509DateUtils.toTimestamp(der.bytesAt(notAfterPtr));
}
These tags determine the time format - whether notBefore and notAfter are encoded as UTC time (0x17) or generalized time (0x18) - and are used to ensure consistency in the timestamps that determine the validity period of the certificate.
However, the validation is faulty: it allows the attestor to pass in almost any value for the notBefore tag. Indeed, the sub-condition
(notBeforeTag != 0x17 && notBeforeTag == 0x18)
evaluates to false for any tag other than 0x18, so invalid notBefore tags are never rejected.
Consider if we pass an invalid tag of 0x10:
- notBeforeTag != 0x17 is True.
- notBeforeTag == 0x18 is False.
- full condition is False.
I believe the original intention was to ensure the notBefore tag is strictly 0x17 or 0x18, just as with the notAfter tag. Because of this oversight, a malicious attestor could pass in any notBefore tag as part of their certificate.
This issue requires attention given the significance of the attestation process of SGX provers within Taiko’s ZK setup. The whole point of attestation is to prove the SGX provers are secure, untampered, and trustworthy, and improper validation related to certificate validity periods can have unforeseen consequences.
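The two boolean conditions can be transcribed directly into Python to confirm the behaviour the report describes: the buggy check lets the invalid 0x10 tag through, while the corrected check rejects it.

```python
def rejects_buggy(not_before_tag: int, not_after_tag: int) -> bool:
    # Faithful transcription of the flawed Solidity condition (`==` on notBefore)
    return (not_before_tag != 0x17 and not_before_tag == 0x18) or \
           (not_after_tag != 0x17 and not_after_tag != 0x18)

def rejects_fixed(not_before_tag: int, not_after_tag: int) -> bool:
    # The report's suggested fix: `!=` on both tags
    return (not_before_tag != 0x17 and not_before_tag != 0x18) or \
           (not_after_tag != 0x17 and not_after_tag != 0x18)

# An invalid notBefore tag (0x10) slips through the buggy check...
buggy_accepts_invalid = not rejects_buggy(0x10, 0x17)
# ...but is caught once the `==` is corrected to `!=`.
fixed_rejects_invalid = rejects_fixed(0x10, 0x17)
# Valid tag combinations still pass under the fixed check.
fixed_accepts_valid = not rejects_fixed(0x17, 0x18)
```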
Recommended Mitigation Steps
Update the condition as below:
if (
(notBeforeTag != 0x17 && notBeforeTag != 0x18)
|| (notAfterTag != 0x17 && notAfterTag != 0x18)
) {
return (false, cert);
}
smtmfft (Taiko) confirmed and commented:
I think this is a valid catch, already submitted a fix.
[M-14] Malicious caller of processMessage() can pocket the fee while forcing excessivelySafeCall() to fail
Submitted by t0x1c, also found by Shield and ladboy233
The logic inside processMessage() provides a reward to the msg.sender if they are not the refundTo address. However, this reward, or _message.fee, is awarded even if the _invokeMessageCall() on L282 fails and the message goes into a RETRIABLE state. In the retriable state, it has to be called by someone again, and the current msg.sender has no obligation to be the one to call it.
This logic can be gamed by a malicious user using the 63/64th rule specified in EIP-150.
File: contracts/bridge/Bridge.sol
278: // Use the specified message gas limit if called by the owner, else
279: // use remaining gas
280: @---> uint256 gasLimit = msg.sender == _message.destOwner ? gasleft() : _message.gasLimit;
281:
282: @---> if (_invokeMessageCall(_message, msgHash, gasLimit)) {
283: _updateMessageStatus(msgHash, Status.DONE);
284: } else {
285: @---> _updateMessageStatus(msgHash, Status.RETRIABLE);
286: }
287: }
288:
289: // Determine the refund recipient
290: address refundTo =
291: _message.refundTo == address(0) ? _message.destOwner : _message.refundTo;
292:
293: // Refund the processing fee
294: if (msg.sender == refundTo) {
295: refundTo.sendEther(_message.fee + refundAmount);
296: } else {
297: // If sender is another address, reward it and refund the rest
298: @---> msg.sender.sendEther(_message.fee);
299: refundTo.sendEther(refundAmount);
300: }
Description
The _invokeMessageCall() on L282 internally calls excessivelySafeCall() on L497. When excessivelySafeCall() makes an external call, only 63/64th of the gas is forwarded to it. Thus the following scenario can happen:
- Malicious user notices that L285-L307 uses approx 165_000 gas.
- He also notices that L226-L280 uses approx 94_000 gas.
- He calculates that he must provide approximately a minimum of 94_000 + (64 * 165_000) = 10_654_000 gas so that the function execution does not revert anywhere.
- Meanwhile, a message owner has a message with a _message.gasLimit of 11_000_000. This is because the receive() function of the contract receiving ether is expected to consume gas in this range due to its internal calls & transactions. The owner expects at least 10_800_000 gas would be used up and hence has provided some extra buffer.
- Note that any message that has a processing requirement of greater than 63 * 165_000 = 10_395_000 gas can now become a target of the malicious user.
- The malicious user now calls processMessage() with a specific gas figure. Let’s use an example figure of {gas: 10_897_060}. This means only 63/64 * (10_897_060 - 94_000) = 10_634_262 is forwarded to excessivelySafeCall() and 1/64 * (10_897_060 - 94_000) = 168_797 is kept back, which is enough for executing the remaining lines of code L285-L307. Note that since (10_897_060 - 94_000) = 10_803_060 is less than the message owner’s provided _message.gasLimit of 11_000_000, what actually gets considered is only 10_803_060.
- The external call reverts inside receive() due to an out-of-gas error (since 10_634_262 < 10_800_000) and hence _success is set to false on L44.
- The remaining lines L285-L307 are executed and the malicious user receives his reward.
- The message goes into the RETRIABLE state now and someone will need to call retryMessage() later on.
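The gas arithmetic in the scenario above can be reproduced with a few lines of Python (using the report's approximate gas figures):

```python
# Approximate gas figures from the report
POST_CALL_GAS = 165_000   # L285-L307, must run after the external call
PRE_CALL_GAS = 94_000     # L226-L280, runs before the external call

# EIP-150 retains 1/64 of remaining gas. The attacker needs that retained
# 1/64 to still cover the post-call lines: total/64 >= POST_CALL_GAS.
min_attack_gas = PRE_CALL_GAS + 64 * POST_CALL_GAS   # minimum viable supply

def forwarded_and_retained(gas_supplied: int) -> tuple[int, int]:
    """Split the gas left after the pre-call lines per the 63/64 rule."""
    available = gas_supplied - PRE_CALL_GAS
    return (available * 63) // 64, available // 64

fwd, kept = forwarded_and_retained(10_897_060)
# fwd is below the ~10.8M the receive() needs, so the inner call OOGs,
# while kept still covers the post-call lines -- the caller pockets the fee.
inner_call_fails = fwd < 10_800_000
outer_call_finishes = kept >= POST_CALL_GAS
```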
A different bug report titled “No incentive for message non-owners to retryMessage()” has also been raised, which highlights the incentivization scheme of retryMessage().
Impact
- The protocol can be gamed by a user to gain rewards while additionally saving money by providing the least possible gas.
- There is no incentive for any external user now to ever provide more than {gas: 10_897_060} (approx. figure).
Proof of Concept
Apply the following patch to add the test inside protocol/test/bridge/Bridge.t.sol and run via forge test -vv --mt test_t0x1c_gasManipulation to see it pass:
diff --git a/packages/protocol/test/bridge/Bridge.t.sol b/packages/protocol/test/bridge/Bridge.t.sol
index 6b7dca6..ce77ce2 100644
--- a/packages/protocol/test/bridge/Bridge.t.sol
+++ b/packages/protocol/test/bridge/Bridge.t.sol
@@ -1,11 +1,19 @@
// SPDX-License-Identifier: MIT
pragma solidity 0.8.24;
import "../TaikoTest.sol";
+contract ToContract {
+ receive() external payable {
+ uint someVar;
+ for(uint loop; loop < 86_990; ++loop)
+ someVar += 1e18;
+ }
+}
+
// A contract which is not our ErcXXXTokenVault
// Which in such case, the sent funds are still recoverable, but not via the
// onMessageRecall() but Bridge will send it back
contract UntrustedSendMessageRelayer {
function sendMessage(
address bridge,
@@ -115,12 +123,71 @@ contract BridgeTest is TaikoTest {
register(address(addressManager), "bridge", address(destChainBridge), destChainId);
register(address(addressManager), "taiko", address(uint160(123)), destChainId);
vm.stopPrank();
}
+
+ function test_t0x1c_gasManipulation() public {
+ //**************** SETUP **********************
+ ToContract toContract = new ToContract();
+ IBridge.Message memory message = IBridge.Message({
+ id: 0,
+ from: address(bridge),
+ srcChainId: uint64(block.chainid),
+ destChainId: destChainId,
+ srcOwner: Alice,
+ destOwner: Alice,
+ to: address(toContract),
+ refundTo: Alice,
+ value: 1000,
+ fee: 1000,
+ gasLimit: 11_000_000,
+ data: "",
+ memo: ""
+ });
+ // Mocking proof - but obviously it needs to be created in prod
+ // corresponding to the message
+ bytes memory proof = hex"00";
+
+ bytes32 msgHash = destChainBridge.hashMessage(message);
+
+ vm.chainId(destChainId);
+ skip(13 hours);
+ assertEq(destChainBridge.messageStatus(msgHash) == IBridge.Status.NEW, true);
+ uint256 carolInitialBalance = Carol.balance;
+
+ uint256 snapshot = vm.snapshot();
+ //**************** SETUP ENDS **********************
+
+
+
+ //**************** NORMAL USER **********************
+ console.log("\n**************** Normal User ****************");
+ vm.prank(Carol, Carol);
+ destChainBridge.processMessage(message, proof);
+
+ assertEq(destChainBridge.messageStatus(msgHash) == IBridge.Status.DONE, true);
+ assertEq(Carol.balance, carolInitialBalance + 1000, "Carol balance mismatch");
+ if (destChainBridge.messageStatus(msgHash) == IBridge.Status.DONE)
+ console.log("message status = DONE");
+
+
+
+ //**************** MALICIOUS USER **********************
+ vm.revertTo(snapshot);
+ console.log("\n**************** Malicious User ****************");
+ vm.prank(Carol, Carol);
+ destChainBridge.processMessage{gas: 10_897_060}(message, proof); // @audit-info : specify gas to force failure of excessively safe external call
+
+ assertEq(destChainBridge.messageStatus(msgHash) == IBridge.Status.RETRIABLE, true); // @audit : message now in RETRIABLE state. Carol receives the fee.
+ assertEq(Carol.balance, carolInitialBalance + 1000, "Carol balance mismatched");
+ if (destChainBridge.messageStatus(msgHash) == IBridge.Status.RETRIABLE)
+ console.log("message status = RETRIABLE");
+ }
+
function test_Bridge_send_ether_to_to_with_value() public {
IBridge.Message memory message = IBridge.Message({
id: 0,
from: address(bridge),
srcChainId: uint64(block.chainid),
destChainId: destChainId,
Tools Used
Foundry
Recommended Mitigation Steps
Reward the msg.sender (provided it’s a non-refundTo address) with _message.fee only if _invokeMessageCall() returns true. Additionally, it is advisable to release this withheld reward to the caller of a successful retryMessage().
dantaik (Taiko) confirmed and commented:
Fixed in https://github.com/taikoxyz/taiko-mono/pull/16613
I don’t think paying fees only when _invokeMessageCall returns true is a good idea, as this will require the relayer to simulate all transactions without guaranteed reward.
Low Risk and Non-Critical Issues
For this audit, 33 reports were submitted by wardens detailing low risk and non-critical issues. The report highlighted below by MrPotatoMagic received the top score from the judge.
The following wardens also submitted reports: Shield, Sathish9098, rjs, zabihullahazadzoi, JCK, cheatc0d3, DadeKuma, 0x11singh99, monrel, slvDev, grearlake, Masamune, imare, josephdara, t0x1c, sxima, joaovwfreire, alexfilippov314, pfapostol, ladboy233, hihen, Pechenite, clara, pa6kuda, albahaca, foxb868, Myd, t4sk, Fassi_Security, oualidpro, Kalyan-Singh, and n1punp.
[L-01] Consider initializing the ContextUpgradeable contract by calling __Context_init() in TaikoToken.sol
ContextUpgradeable is not initialized in the TaikoToken.sol contract. This contract is used in ERC20PermitUpgradeable, which is used in ERC20VotesUpgradeable. But neither contract initializes the Context contract when the contracts themselves are initialized.
In TaikoToken.sol, we can see that the below __Context_init() function is not called.
File: ContextUpgradeable.sol
18: function __Context_init() internal onlyInitializing {
19: }
20:
21: function __Context_init_unchained() internal onlyInitializing {
22: }
[L-02] __ERC1155Receiver_init() not initialized in ERC1155Vault
Consider calling these functions from an init() function in the ERC1155Vault contract.
File: ERC1155ReceiverUpgradeable.sol
14: function __ERC1155Receiver_init() internal onlyInitializing {
15: }
16:
17: function __ERC1155Receiver_init_unchained() internal onlyInitializing {
18: }
[L-03] If amountUnlocked in TimelockTokenPool is less than 1e18, rounding down occurs
If amountUnlocked is less than 1e18, rounding down occurs. This is usually not a problem, since grants will typically deal with much higher values and thus higher unlocking. But it would be a problem for team members or advisors receiving perhaps 10 taiko or less (in case the price of taiko is high). The more frequent the withdrawals, the higher the chance of losing tokens due to round-down.
File: TimelockTokenPool.sol
198: uint128 _amountUnlocked = amountUnlocked / 1e18; // divide first
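A simplified Python sketch of the truncation (the real contract also tracks amounts already withdrawn, so actual losses depend on the claim pattern; the grant size and tranche count here are hypothetical):

```python
WEI = 10**18  # the 1e18 divisor used by the contract

def withdrawable_tokens(amount_unlocked_wei: int) -> int:
    # Mirrors `uint128 _amountUnlocked = amountUnlocked / 1e18;` --
    # Solidity integer division truncates, discarding sub-1e18 dust.
    return amount_unlocked_wei // WEI

# Hypothetical small grant: 10 TKO unlocking linearly, withdrawn in 3 tranches.
tranche = (10 * WEI) // 3                  # ~3.33 TKO unlocked per tranche
received = sum(withdrawable_tokens(tranche) for _ in range(3))
lost_to_rounding = 10 - received           # dust truncated on every withdrawal
```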
[L-04] sendSignal() calls can be spammed by attacker to relayer
Since the function is external, an attacker can continuously spam signals to the offchain relayer which is always listening to signals. This would be more cost efficient on Taiko where fees are cheap.
The signals could also be used to mess with the relayer service, i.e. by front-running a user and sending the same bytes32 _signal parameter early.
File: SignalService.sol
68: function sendSignal(bytes32 _signal) external returns (bytes32) {
69: return _sendSignal(msg.sender, _signal, _signal);
70: }
[L-05] Add “Zero if owner will process themself” comment to gasLimit instead of fee
In the current code, the preferredExecutor for executing bridged transactions is determined by whether the gasLimit is 0 or not, not by the fee.
File: IBridge.sol
38: // Processing fee for the relayer. Zero if owner will process themself.
39: uint256 fee;
40: // gasLimit to invoke on the destination chain.
41: uint256 gasLimit;
[L-06] Bridge integration issues with swapping protocols
Cross-chain swapping could not occur on chains having long invocation delays, since the deadline of the swap might expire and become outdated. Consider having custom delays for dapps looking to use the bridge.
File: Bridge.sol
459: /// the transactor is not the preferredExecutor who proved this message.
460: function getInvocationDelays()
461: public
462: view
463: virtual
464: returns (uint256 invocationDelay_, uint256 invocationExtraDelay_)
465: {
466: if (
467: block.chainid == 1 // Ethereum mainnet
468: ) {
469: // For Taiko mainnet
470: // 384 seconds = 6.4 minutes = one ethereum epoch
471: return (1 hours, 384 seconds);
472: } else if (
473: block.chainid == 2 || // Ropsten
474: block.chainid == 4 || // Rinkeby
475: block.chainid == 5 || // Goerli
476: block.chainid == 42 || // Kovan
477: block.chainid == 17_000 || // Holesky
478: block.chainid == 11_155_111 // Sepolia
479: ) {
480: // For all Taiko public testnets
481: return (30 minutes, 384 seconds);
482: } else if (block.chainid >= 32_300 && block.chainid <= 32_400) {
483: // For all Taiko internal devnets
484: return (5 minutes, 384 seconds);
485: } else {
486: // This is a Taiko L2 chain where no deleys are applied.
487: return (0, 0);
488: }
489: }
[L-07] sendMessage() does not check if STATUS is equal to NEW
Adding a sanity check would be good to avoid calling a message that is not in the STATUS = NEW state. This would ensure retriable, recalled, and failed txns cannot be repeated.
File: Bridge.sol
119: function sendMessage(
120: Message calldata _message
121: )
122: external
123: payable
124: override
125: nonReentrant
126: whenNotPaused
127: returns (bytes32 msgHash_, Message memory message_)
128: {
129: // Ensure the message owner is not null.
130: if (
131: _message.srcOwner == address(0) || _message.destOwner == address(0)
132: ) {
133: revert B_INVALID_USER();
134: }
135:
136: // Check if the destination chain is enabled.
137: (bool destChainEnabled, ) = isDestChainEnabled(_message.destChainId);
138:
139: // Verify destination chain and to address.
140: if (!destChainEnabled) revert B_INVALID_CHAINID();
141: if (_message.destChainId == block.chainid) {
142: revert B_INVALID_CHAINID();
143: }
144:
145: // Ensure the sent value matches the expected amount.
146:
148: uint256 expectedAmount = _message.value + _message.fee;
149: if (expectedAmount != msg.value) revert B_INVALID_VALUE();
150:
151: message_ = _message;
152:
153: // Configure message details and send signal to indicate message sending.
154: message_.id = nextMessageId++;
155: message_.from = msg.sender;
156: message_.srcChainId = uint64(block.chainid);
157:
158: msgHash_ = hashMessage(message_);
159:
160: ISignalService(resolve("signal_service", false)).sendSignal(msgHash_);
161: emit MessageSent(msgHash_, message_);
162: }
[L-08] Protocol does not refund extra ETH but implements strict check
The IBridge.sol contract specifies that extra ETH provided when sending a message is refunded back to the user. This currently does not happen, since the code implements a strict equality check. Using strict equality is better, but either the implemented code should follow the described spec, or the spec should be corrected in the IBridge.sol contract.
File: Bridge.sol
146: uint256 expectedAmount = _message.value + _message.fee;
147: if (expectedAmount != msg.value) revert B_INVALID_VALUE();
[L-09] If a message is suspended before processMessage() is called, the ERC20 tokens on the source chain and Ether are not refunded.
If a message is suspended before processMessage() is called, the status of the message remains NEW, and the ERC20 tokens on the source chain, as well as the Ether, are locked. If the message will never be unsuspended, consider refunding the tokens to the user.
File: Bridge.sol
287: if (block.timestamp >= invocationDelay + receivedAt) {
288: // If the gas limit is set to zero, only the owner can process the message.
289: if (_message.gasLimit == 0 && msg.sender != _message.destOwner) {
290: revert B_PERMISSION_DENIED();
291: }
[L-10] User loses all Ether if their address is blacklisted on canonical token
When recalls are made on the source chain using the function recallMessage(), it calls the onMessageRecalled() function on the ERC20Vault contract. The onMessageRecalled() function transfers the ERC20 tokens back to the user along with any Ether that was supplied.
The issue is with this dual transfer, where both ERC20 tokens and Ether are transferred to the user in the same call. If the user is blacklisted on the canonical token, the whole call reverts, causing the Ether to be stuck in the Bridge contract.
To understand this, let’s consider a simple example:
- User bridges ERC20 canonical tokens and Ether from chain A to chain B.
- The message call on the destination chain B goes into RETRIABLE status if it fails for the first time. (Note: User can only process after invocation delay).
- On multiple retries after a while, the user decides to make a last attempt, on which the call fails and goes into FAILED status.
- During this time on chain B, the user was blacklisted on the ERC20 canonical token on the source chain.
- When the failure signal is received by the source chain A from chain B, the user calls recallMessage() on chain A only to find out that although the blacklist is only for the canonical ERC20 token, the Ether is stuck as well.
[L-11] onMessageInvocation checks in _invokeMessageCall() can be bypassed to call arbitrary function from Bridge contract
The if block requires the data to be greater than or equal to 4 bytes, checks it against the onMessageInvocation selector, and, last but not least, requires the target address to be a contract.
What an attacker could do to bypass this expected spec is to pre-compute an address for the destination chain and pass it in _message.to. He can pass gasLimit = 0 from the source chain so that only he can process the message on the destination.
On the destination chain, the attacker can deploy his pre-computed contract and call processMessage() from its constructor. For a chain (L2s/L3s) with no invocation delays, the proving + executing of the message data would go through in one single call.
When we arrive at the isContract check below on the _message.to address, it evaluates to false, since the size of a contract during construction is 0. Due to this, the attacker can bypass the onMessageInvocation selector check, which the protocol requires as the single source of tx origination for all transactions occurring from the bridge contract. This breaks a core invariant of the protocol.
File: Bridge.sol
513: if (
514: _message.data.length >= 4 && // msg can be empty
515: bytes4(_message.data) !=
516: IMessageInvocable.onMessageInvocation.selector &&
517: _message.to.isContract()
518: ) {
519: success_ = false;
520: } else {
521: (success_, ) = ExcessivelySafeCall.excessivelySafeCall(
522: _message.to,
523: _gasLimit,
524: _message.value,
525: 64, // return max 64 bytes
526: _message.data
527: );
528: }
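The guard's short-circuit behaviour can be modeled as a pure boolean function (the 4-byte payload below is an arbitrary example selector, not a real one): the call is forced to fail only when all three conditions hold, so zeroing out any one of them, such as `isContract` during construction, lets the call through.

```python
def guard_blocks_call(data: bytes, selector_matches: bool,
                      to_is_contract: bool) -> bool:
    # Models the Bridge guard: the call is short-circuited to failure only
    # when data is long enough AND the selector is wrong AND `to` has code.
    return len(data) >= 4 and not selector_matches and to_is_contract

payload = b"\xde\xad\xbe\xef"  # arbitrary selector, not onMessageInvocation

# Normally the guard catches the wrong selector:
blocked_normally = guard_blocks_call(payload, selector_matches=False,
                                     to_is_contract=True)

# But while the attacker's pre-computed contract is still in its constructor,
# extcodesize(to) == 0, so isContract() is False and the call slips through:
blocked_during_construction = guard_blocks_call(payload, selector_matches=False,
                                                to_is_contract=False)
```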
[L-12] Consider reading return value from snapshot() function
The snapshot() function returns a uint256 snapshotId. Capturing these ids makes it easier for the devs to take and reference multiple timely snapshots.
File: TaikoToken.sol
54: function snapshot() public onlyFromOwnerOrNamed("snapshooter") {
55: _snapshot();
56: }
[L-13] Off-by-one error in block sync threshold check to sync chain data
The check should be _l1BlockId >= lastSyncedBlock + BLOCK_SYNC_THRESHOLD, since the threshold is a minimum.
File: TaikoL2.sol
150: if (_l1BlockId > lastSyncedBlock + BLOCK_SYNC_THRESHOLD) {
151: // Store the L1's state root as a signal to the local signal service to
152: // allow for multi-hop bridging.
153: ISignalService(resolve("signal_service", false)).syncChainData(
154: ownerChainId,
155: LibSignals.STATE_ROOT,
156: _l1BlockId,
157: _l1StateRoot
158: );
Same issue here:
File: LibVerifying.sol
240: if (_lastVerifiedBlockId > lastSyncedBlock + _config.blockSyncThreshold) {
241: signalService.syncChainData(
242: _config.chainId, LibSignals.STATE_ROOT, _lastVerifiedBlockId, _stateRoot
243: );
244: }
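The boundary case the report flags can be sketched with an illustrative threshold value (the real value comes from configuration):

```python
BLOCK_SYNC_THRESHOLD = 5  # illustrative; the real value comes from config

def syncs_current(l1_block_id: int, last_synced: int) -> bool:
    return l1_block_id > last_synced + BLOCK_SYNC_THRESHOLD   # deployed `>`

def syncs_suggested(l1_block_id: int, last_synced: int) -> bool:
    return l1_block_id >= last_synced + BLOCK_SYNC_THRESHOLD  # report's `>=`

# A block exactly BLOCK_SYNC_THRESHOLD ahead is the boundary case:
boundary_skipped = not syncs_current(105, 100)   # current code skips the sync
boundary_synced = syncs_suggested(105, 100)      # suggested check performs it
```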
[L-14] Off-by-one error when evaluating deposits to process with the ring buffer size
When calculating the deposits to process, we do not want to overwrite existing slots. This is why the last check/condition is implemented.
The issue with the condition is that it is off by one from the max size the ring buffer allows. Since the - 1 is already applied, make the check < into <= to work to its full capacity.
File: LibDepositing.sol
148: unchecked {
149:
150: return
151: _amount >= _config.ethDepositMinAmount &&
152: _amount <= _config.ethDepositMaxAmount &&
153: _state.slotA.numEthDeposits -
154: _state.slotA.nextEthDepositToProcess <
155: _config.ethDepositRingBufferSize - 1;
156: }
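A sketch of the adjusted condition (only the comparison operator changes):

```solidity
return
    _amount >= _config.ethDepositMinAmount &&
    _amount <= _config.ethDepositMaxAmount &&
    _state.slotA.numEthDeposits -
        _state.slotA.nextEthDepositToProcess <=   // was <
        _config.ethDepositRingBufferSize - 1;
```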
[R-01] Consider implementing changeBridgedToken() and btokenBlacklist for ERC721Vault and ERC1155Vault
Both vaults are currently missing these two functions. Implementing them is not strictly required, but they would serve as a safety net for high-value NFT collections in emergency scenarios.
[R-02] Instead of passing an empty string for the data parameter in NFT vaults on token transfers, allow users to supply data
Allow users to supply the data parameter when tokens are transferred from the vault to them, so that any off-chain compatibility/functionality can be built on top.
File: ERC1155Vault.sol
227: function _transferTokens(
228: CanonicalNFT memory ctoken,
229: address to,
230: uint256[] memory tokenIds,
231: uint256[] memory amounts
232: ) private returns (address token) {
233: if (ctoken.chainId == block.chainid) {
234: // Token lives on this chain
235: token = ctoken.addr;
236:
237: IERC1155(token).safeBatchTransferFrom(
238: address(this),
239: to,
240: tokenIds,
241: amounts,
242: ""
243: );
[R-03] Use named imports to improve readability of the code and avoid polluting the global namespace
File: LibAddress.sol
4: import "@openzeppelin/contracts/utils/Address.sol";
5: import "@openzeppelin/contracts/utils/cryptography/ECDSA.sol";
6: import "@openzeppelin/contracts/utils/introspection/IERC165.sol";
7: import "@openzeppelin/contracts/interfaces/IERC1271.sol";
8: import "../thirdparty/nomad-xyz/ExcessivelySafeCall.sol";
[N-01] Avoid hardcoding data in BridgedERC1155
Instead of hardcoding the data, declare it as a constant and use that constant here for better maintainability.
File: BridgedERC1155.sol
53: LibBridgedToken.validateInputs(_srcToken, _srcChainId, "foo", "foo");
[N-02] Missing source()/canonical() function on BridgedERC1155 contract
The BridgedERC1155 contract should implement a function similar to source()/canonical() as done in the other two vaults. This would make it easier for external dapps to retrieve the data.
[N-03] Using unchecked arithmetic in for loops is handled by solc compiler 0.8.22 onwards
File: MerkleTrie.sol
205: function _parseProof(bytes[] memory _proof) private pure returns (TrieNode[] memory proof_) {
206: uint256 length = _proof.length;
207: proof_ = new TrieNode[](length);
208: for (uint256 i = 0; i < length;) {
209: proof_[i] = TrieNode({ encoded: _proof[i], decoded: RLPReader.readList(_proof[i]) });
210:
211: unchecked {
212: ++i;
213: }
214: }
215: }
[N-04] Typo in comment in Bytes.sol
Use "rather" instead of "rathern".
File: Bytes.sol
93: /// @notice Slices a byte array with a given starting index up to the end of the original byte
94: /// array. Returns a new array rathern than a pointer to the original.
[N-05] Incorrect comment regarding gasLimit in processMessage()
As confirmed with the sponsor, the comment above the gasLimit variable should be inverted, i.e. use gasleft() if called by the owner, else use the specified gas limit.
File: Bridge.sol
307: } else {
308: // Use the specified message gas limit if called by the owner, else
309: // use remaining gas
310:
311: uint256 gasLimit = msg.sender == _message.destOwner
312: ? gasleft()
313: : _message.gasLimit;
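A sketch of the corrected comment, matching the ternary below it:

```solidity
// Use the remaining gas if called by the owner, else use the
// specified message gas limit.
uint256 gasLimit = msg.sender == _message.destOwner
    ? gasleft()
    : _message.gasLimit;
```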
[N-06] Use require instead of assert
Use require instead of assert to avoid a Panic error; see the Solidity documentation on error handling.
File: Bridge.sol
503: function _invokeMessageCall(
504: Message calldata _message,
505: bytes32 _msgHash,
506: uint256 _gasLimit
507: ) private returns (bool success_) {
508: if (_gasLimit == 0) revert B_INVALID_GAS_LIMIT();
509: assert(_message.from != address(this));
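A sketch of the replacement (the revert string is illustrative; a custom error matching the codebase's style could be used instead):

```solidity
if (_gasLimit == 0) revert B_INVALID_GAS_LIMIT();
// Instead of assert(_message.from != address(this)), which raises a Panic:
require(_message.from != address(this), "B: invalid message.from");
```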
[N-07] Incorrect natspec comment for proveMessageReceived()
Correct the first comment on Line 394 to "msgHash has been received".
File: Bridge.sol
394: /// @notice Checks if a msgHash has failed on its destination chain.
395: /// @param _message The message.
396: /// @param _proof The merkle inclusion proof.
397: /// @return true if the message has failed, false otherwise.
398: function proveMessageReceived(
399: Message calldata _message,
400: bytes calldata _proof
401: ) public view returns (bool) {
[N-08] Missing address(0) check for USDC in USDCAdapter
It is important to implement this check in init() functions since they can only be called once.
File: USDCAdapter.sol
38: function init(address _owner, address _addressManager, IUSDC _usdc) external initializer {
39: __Essential_init(_owner, _addressManager);
40:
41: usdc = _usdc;
42: }
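A sketch of the guard (the revert string is illustrative only):

```solidity
function init(address _owner, address _addressManager, IUSDC _usdc) external initializer {
    __Essential_init(_owner, _addressManager);
    // Reject the zero address before storing, since init() runs only once.
    require(address(_usdc) != address(0), "USDCAdapter: zero address");
    usdc = _usdc;
}
```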
[N-09] srcToken and srcChainId is not updated on old token after migration through changeBridgedToken()
When a token is migrated to another token, the old token still points to the same srcToken and srcChainId as the new token, since these values are not updated by changeBridgedToken().
Because of this, external dapps that integrate with and reference these values could run into issues. Consider clearing them, or replacing them with placeholder data that represents the source token and chain ID but carries a distinguishing prefix.
File: BridgedERC20.sol
123: function canonical() public view returns (address, uint256) {
124: return (srcToken, srcChainId);
125: }
[N-10] MerkleClaimable does not check if claimStart is less than claimEnd
File: MerkleClaimable.sol
90: function _setConfig(uint64 _claimStart, uint64 _claimEnd, bytes32 _merkleRoot) private {
91:
92: claimStart = _claimStart;
93: claimEnd = _claimEnd;
94: merkleRoot = _merkleRoot;
95: }
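A sketch of the missing sanity check (error text is illustrative):

```solidity
function _setConfig(uint64 _claimStart, uint64 _claimEnd, bytes32 _merkleRoot) private {
    // Ensure the claim window is well-formed before storing it.
    require(_claimStart < _claimEnd, "MerkleClaimable: invalid claim window");
    claimStart = _claimStart;
    claimEnd = _claimEnd;
    merkleRoot = _merkleRoot;
}
```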
[N-11] Consider reading return value from snapshot() function
The snapshot() function returns a uint256 snapshotId. Capturing these IDs when the snapshots are taken makes it easier for developers to reference them later, especially across multiple scheduled snapshots.
File: TaikoToken.sol
54: function snapshot() public onlyFromOwnerOrNamed("snapshooter") {
55: _snapshot();
56: }
[N-12] Guardian proof that is never fully approved by minGuardians is never deleted
A guardian proof hash is only deleted once it has been approved by the minimum number of guardians in the approval bits. If it never reaches that threshold, the approval for the hash remains in storage and is never deleted.
File: GuardianProver.sol
50: if (approved_) {
51: deleteApproval(hash);
52: ITaikoL1(resolve("taiko", false)).proveBlock(_meta.id, abi.encode(_meta, _tran, _proof));
53: }
[N-13] Consider making the TIMELOCK_ADMIN_ROLE undergo a delay when transferring the admin role
The admin is allowed to skip the delay for operations, but the delay should not be skippable when the admin role itself is being transferred.
File: TaikoTimelockController.sol
25: function getMinDelay() public view override returns (uint256) {
26: return hasRole(TIMELOCK_ADMIN_ROLE, msg.sender) ? 0 : super.getMinDelay();
27: }
dantaik (Taiko) confirmed and commented:
Many of the above are fixed in https://github.com/taikoxyz/taiko-mono/pull/16627/files:
Gas Optimizations
For this audit, 28 reports were submitted by wardens detailing gas optimizations. The report highlighted below by DadeKuma received the top score from the judge.
The following wardens also submitted reports: 0x11singh99, dharma09, zabihullahazadzoi, 0xAnah, slvDev, hunter_w3b, pfapostol, MrPotatoMagic, hihen, albahaca, Sathish9098, IllIllI, Auditor2947, rjs, SAQ, SY_S, SM3_SS, cheatc0d3, clara, pavankv, unique, sxima, 0xhacksmithh, K42, Pechenite, oualidpro, and caglankaan.
Gas Optimizations
Id | Title | Instances | Gas Saved |
---|---|---|---|
[G-01] | Use Array.unsafeAccess to avoid repeated array length checks | 80 | 168,000 |
[G-02] | State variables can be packed into fewer storage slots | 2 | 40,000 |
[G-03] | Structs can be packed into fewer storage slots | 3 | 60,000 |
[G-04] | Multiple mappings that share an ID can be combined into a single mapping of ID / struct | 1 | 20,084 |
[G-05] | Use of memory instead of storage for struct/array state variables | 2 | 12,600 |
[G-06] | State variables access within a loop | 4 | 1,060 |
[G-07] | Unused non-constant state variables waste gas | 2 | - |
[G-08] | State variables only set in the constructor should be declared immutable | 2 | 40,000 |
[G-09] | Cache state variables with stack variables | 17 | 4,400 |
[G-10] | Modifiers order should be changed | 5 | - |
[G-11] | Low level call can be optimized with assembly | 1 | 248 |
[G-12] | Use of memory instead of calldata for immutable arguments | 116 | 41,658 |
[G-13] | Avoid updating storage when the value hasn't changed | 12 | 8,400 |
[G-14] | Use of emit inside a loop | 4 | 4,104 |
[G-15] | Use uint256(1)/uint256(2) instead of true/false to save gas for changes | 10 | 170,000 |
[G-16] | Shortcircuit rules can be used to optimize some gas usage | 1 | 2,100 |
[G-17] | Cache multiple accesses of a mapping/array | 13 | 672 |
[G-18] | Redundant state variable getters | 2 | - |
[G-19] | Using private for constants saves gas | 14 | 117,684 |
[G-20] | require() or revert() statements that check input arguments should be at the top of the function | 4 | - |
[G-21] | Consider activating via-ir for deploying | - | - |
[G-22] | Function calls should be cached instead of re-calling the function | 12 | - |
[G-23] | Functions that revert when called by normal users can be payable | 37 | 777 |
[G-24] | Caching global variables is more expensive than using the actual variable | 1 | 12 |
[G-25] | Add unchecked blocks for subtractions where the operands cannot underflow | 7 | 595 |
[G-26] | Add unchecked blocks for divisions where the operands cannot overflow | 13 | 2,067 |
[G-27] | Empty blocks should be removed or emit something | 1 | 4,006 |
[G-28] | Usage of uints/ints smaller than 32 bytes (256 bits) incurs overhead | 322 | 1,932 |
[G-29] | Stack variable cost less while used in emitting event | 7 | 63 |
[G-30] | Redundant event fields can be removed | 1 | 358 |
[G-31] | Using pre instead of post increments/decrements | 7 | 5 |
[G-32] | >=/<= costs less gas than >/< | 130 | 390 |
[G-33] | internal functions only called once can be inlined to save gas | 20 | 400 |
[G-34] | Inline modifiers that are only used once, to save gas | 5 | - |
[G-35] | private functions only called once can be inlined to save gas | 41 | 820 |
[G-36] | Use multiple revert checks to save gas | 37 | 74 |
[G-37] | abi.encode() is less efficient than abi.encodePacked() for non-address arguments | 16 | 80 |
[G-38] | Unused named return variables without optimizer waste gas | 20 | 54 |
[G-39] | Consider pre-calculating the address of address(this) to save gas | 40 | - |
[G-40] | Consider using Solady's FixedPointMathLib | 4 | - |
[G-41] | Reduce deployment costs by tweaking contracts' metadata | 86 | - |
[G-42] | Emitting constants wastes gas | 4 | 32 |
[G-43] | Update OpenZeppelin dependency to save gas | 54 | - |
[G-44] | Function names can be optimized | 56 | 1,232 |
[G-45] | Avoid zero transfers calls | 10 | - |
[G-46] | Using delete instead of setting mapping/struct to 0 saves gas | 10 | 50 |
[G-47] | Using bool for storage incurs overhead | 10 | 1,000 |
[G-48] | Mappings are cheaper to use than storage arrays | 36 | 75,600 |
[G-49] | Bytes constants are more efficient than string constants | 5 | 1,890 |
[G-50] | Constructors can be marked payable | 3 | 63 |
[G-51] | Inverting the if condition wastes gas | 2 | 6 |
[G-52] | Long revert strings | 27 | 720 |
[G-53] | Nesting if statements that use && saves gas | 23 | 690 |
[G-54] | Counting down when iterating, saves gas | 45 | 270 |
[G-55] | do-while is cheaper than for-loops when the initial check can be skipped | 49 | - |
[G-56] | Lack of unchecked in loops | 39 | 2,340 |
[G-57] | uint comparison with zero can be cheaper | 15 | 60 |
[G-58] | Use assembly to check for address(0) | 74 | 444 |
[G-59] | Use scratch space for building calldata with assembly | 333 | 73,260 |
[G-60] | Use assembly to write hashes | 25 | 3,000 |
[G-61] | Use assembly to validate msg.sender | 17 | 204 |
[G-62] | Use assembly to write address storage values | 18 | 1,332 |
[G-63] | Use assembly to emit an event | 55 | 2,090 |
Total: 2012 instances over 63 issues with an estimate of 866,926 gas saved.
Gas Optimizations
[G-01] Use Array.unsafeAccess to avoid repeated array length checks
When using storage arrays, Solidity adds an internal lookup of the array's length (a Gcoldsload, 2100 gas) to ensure it doesn't read past the array's end. It's possible to avoid this lookup by using Array.unsafeAccess in cases where the length has already been checked.
There are 80 instances of this issue.
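An illustrative sketch of the pattern, assuming OpenZeppelin's Arrays.unsafeAccess helper (available since OZ v4.9); the contract and names are hypothetical:

```solidity
import { Arrays } from "@openzeppelin/contracts/utils/Arrays.sol";

contract UnsafeAccessExample {
    uint256[] internal values;

    function sumAll() external view returns (uint256 total) {
        uint256 length = values.length; // bounds established once
        for (uint256 i; i < length; ++i) {
            // Skips the implicit length check that values[i] would repeat.
            total += Arrays.unsafeAccess(values, i).value;
        }
    }
}
```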
File: packages/protocol/contracts/automata-attestation/AutomataDcapV3Attestation.sol
81: if (_serialNumIsRevoked[index][serialNumBatch[i]]) {
84: _serialNumIsRevoked[index][serialNumBatch[i]] = true;
96: if (!_serialNumIsRevoked[index][serialNumBatch[i]]) {
99: delete _serialNumIsRevoked[index][serialNumBatch[i]];
192: EnclaveIdStruct.TcbLevel memory tcb = enclaveId.tcbLevels[i];
215: TCBInfoStruct.TCBLevelObj memory current = tcb.tcbLevels[i];
241: if (pckCpuSvns[i] < tcbCpuSvns[i]) {
263: issuer = certs[i];
265: issuer = certs[i + 1];
268: certRevoked = _serialNumIsRevoked[uint256(IPEMCertChainLib.CRL.ROOT)][certs[i]
269: .serialNumber];
270: } else if (certs[i].isPck) {
271: certRevoked = _serialNumIsRevoked[uint256(IPEMCertChainLib.CRL.PCK)][certs[i]
272: .serialNumber];
280: block.timestamp > certs[i].notBefore && block.timestamp < certs[i].notAfter;
286: certs[i].tbsCertificate, certs[i].signature, issuer.pubKey
424: (certDecodedSuccessfully, parsedQuoteCerts[i]) = pemCertLib.decodeCert(
425: authDataV3.certification.decodedCertDataArray[i], isPckCert
[81, 84, 96, 99, 192, 215, 241, 263, 265, 268-269, 270, 271-272, 280, 286, 424, 425]
File: packages/protocol/contracts/bridge/Bridge.sol
91: bytes32 msgHash = _msgHashes[i];
92: proofReceipt[msgHash].receivedAt = _timestamp;
File: packages/protocol/contracts/L2/TaikoL2.sol
236: inputs[j % 255] = blockhash(j);
[236]
File: packages/protocol/contracts/signal/SignalService.sol
105: hop = hopProofs[i];
[105]
File: packages/protocol/contracts/tokenvault/ERC1155Vault.sol
48: if (_op.amounts[i] == 0) revert VAULT_INVALID_AMOUNT();
252: BridgedERC1155(_op.token).burn(_user, _op.tokenIds[i], _op.amounts[i]);
273: id: _op.tokenIds[i],
274: amount: _op.amounts[i],
File: packages/protocol/contracts/tokenvault/ERC721Vault.sol
35: if (_op.amounts[i] != 0) revert VAULT_INVALID_AMOUNT();
171: IERC721(token_).safeTransferFrom(address(this), _to, _tokenIds[i]);
176: BridgedERC721(token_).mint(_to, _tokenIds[i]);
198: BridgedERC721(_op.token).burn(_user, _op.tokenIds[i]);
211: t.safeTransferFrom(_user, address(this), _op.tokenIds[i]);
File: packages/protocol/contracts/verifiers/SgxVerifier.sol
105: uint256 idx = _ids[i];
107: if (instances[idx].addr == address(0)) revert SGX_INVALID_INSTANCE();
109: emit InstanceDeleted(idx, instances[idx].addr);
111: delete instances[idx];
211: if (addressRegistered[_instances[i]]) revert SGX_ALREADY_ATTESTED();
213: addressRegistered[_instances[i]] = true;
215: if (_instances[i] == address(0)) revert SGX_INVALID_INSTANCE();
217: instances[nextInstanceId] = Instance(_instances[i], validSince);
218: ids[i] = nextInstanceId;
220: emit InstanceAdded(nextInstanceId, _instances[i], address(0), validSince);
[105, 107, 109, 111, 211, 213, 215, 217, 218, 220]
File: packages/protocol/contracts/automata-attestation/lib/PEMCertChainLib.sol
62: (success, certs[i], increment) = _removeHeadersAndFooters(input);
245: contentStr = LibString.concat(contentStr, split[i]);
367: cpusvns[i] = cpusvn;
File: packages/protocol/contracts/automata-attestation/utils/BytesUtils.sol
334: bytes1 char = self[off + i];
336: decoded = uint8(BASE32_HEX_TABLE[uint256(uint8(char)) - 0x30]);
File: packages/protocol/contracts/automata-attestation/utils/RsaVerify.sol
141: if (decipher[i] != 0xff) {
153: if (decipher[3 + paddingLen + i] != bytes1(sha256ExplicitNullParam[i])) {
159: if (decipher[3 + paddingLen + i] != bytes1(sha256ImplicitNullParam[i])) {
175: if (decipher[5 + paddingLen + digestAlgoWithParamLen + i] != _sha256[i]) {
274: if (decipher[i] != 0xff) {
284: if (decipher[3 + paddingLen + i] != bytes1(sha1Prefix[i])) {
291: if (decipher[3 + paddingLen + sha1Prefix.length + i] != _sha1[i]) {
[141, 153, 159, 175, 274, 284, 291]
File: packages/protocol/contracts/automata-attestation/utils/X509DateUtils.sol
60: timestamp += uint256(monthDays[i - 1]) * 86_400; // Days in seconds
[60]
File: packages/protocol/contracts/L1/hooks/AssignmentHook.sol
173: if (_tierFees[i].tier == _tierId) return _tierFees[i].fee;
[173]
File: packages/protocol/contracts/L1/libs/LibDepositing.sol
87: uint256 data = _state.ethDeposits[j % _config.ethDepositRingBufferSize];
88: deposits_[i] = TaikoData.EthDeposit({
93: uint96 _fee = deposits_[i].amount > fee ? fee : deposits_[i].amount;
101: deposits_[i].amount -= _fee;
File: packages/protocol/contracts/L1/libs/LibProposing.sol
245: if (uint160(prevHook) >= uint160(params.hookCalls[i].hook)) {
253: IHook(params.hookCalls[i].hook).onBlockProposed{ value: address(this).balance }(
254: blk, meta_, params.hookCalls[i].data
257: prevHook = params.hookCalls[i].hook;
File: packages/protocol/contracts/L1/provers/Guardians.sol
75: delete guardianIds[guardians[i]];
81: address guardian = _newGuardians[i];
84: if (guardianIds[guardian] != 0) revert INVALID_GUARDIAN_SET();
88: guardianIds[guardian] = guardians.length;
File: packages/protocol/contracts/team/airdrop/ERC721Airdrop.sol
60: IERC721(token).safeTransferFrom(vault, user, tokenIds[i]);
[60]
File: packages/protocol/contracts/automata-attestation/lib/QuoteV3Auth/V3Parser.sol
154: uint256 digits = uint256(uint8(bytes1(encoded[i])));
282: quoteCerts[i] = Base64.decode(string(quoteCerts[i]));
File: packages/protocol/contracts/thirdparty/optimism/rlp/RLPWriter.sol
47: out_[i] = bytes1(uint8((_len / (256 ** (lenLen - i))) % 256));
60: if (b[i] != 0) {
67: out_[j] = b[i++];
File: packages/protocol/contracts/thirdparty/optimism/trie/MerkleTrie.sol
86: TrieNode memory currentNode = proof[i];
118: value_ = RLPReader.readBytes(currentNode.decoded[TREE_RADIX]);
134: uint8 branchKey = uint8(key[currentKeyIndex]);
135: RLPReader.RLPItem memory nextNode = currentNode.decoded[branchKey];
141: uint8 prefix = uint8(path[0]);
171: value_ = RLPReader.readBytes(currentNode.decoded[1]);
188: currentNodeID = _getNodeID(currentNode.decoded[1]);
209: proof_[i] = TrieNode({ encoded: _proof[i], decoded: RLPReader.readList(_proof[i]) });
244: for (; shared_ < max && _a[shared_] == _b[shared_];) {
[G-02] State variables can be packed into fewer storage slots
Each slot saved can avoid an extra Gsset (20000 gas). Reads and writes (when two variables that occupy the same slot are accessed by the same function) will also be cheaper.
There are 2 instances of this issue.
File: packages/protocol/contracts/automata-attestation/AutomataDcapV3Attestation.sol
// @audit Can save 1 storage slot (from 7 to 6)
// @audit Consider using the following order:
/*
* mapping(bytes32 => bool) _trustedUserMrEnclave (32)
* mapping(bytes32 => bool) _trustedUserMrSigner (32)
* mapping(uint256 => mapping(bytes => bool)) _serialNumIsRevoked (32)
* mapping(string => TCBInfoStruct.TCBInfo) tcbInfo (32)
* EnclaveIdStruct.EnclaveId qeIdentity (20)
* bool _checkLocalEnclaveReport (1)
* address owner (20)
*/
22: contract AutomataDcapV3Attestation is IAttestation {
[22]
File: packages/protocol/contracts/team/airdrop/ERC20Airdrop2.sol
// @audit Can save 1 storage slot (from 6 to 5)
// @audit Consider using the following order:
/*
* mapping(address => uint256) claimedAmount (32)
* mapping(address => uint256) withdrawnAmount (32)
* uint256[] __gap (32)
* address token (20)
* uint64 withdrawalWindow (8)
* address vault (20)
*/
12: contract ERC20Airdrop2 is MerkleClaimable {
[12]
[G-03] Structs can be packed into fewer storage slots
Each slot saved can avoid an extra Gsset (20000 gas) for the first setting of the struct. Subsequent reads as well as writes have smaller gas savings.
There are 3 instances of this issue.
File: packages/protocol/contracts/L1/TaikoData.sol
// @audit Can save 1 storage slot (from 8 to 7)
// @audit Consider using the following order:
/*
* uint256 ethDepositRingBufferSize (32)
* uint256 ethDepositGas (32)
* uint256 ethDepositMaxFee (32)
* uint96 livenessBond (12)
* uint96 ethDepositMinAmount (12)
* uint64 chainId (8)
* uint96 ethDepositMaxAmount (12)
* uint64 blockMaxProposals (8)
* uint64 blockRingBufferSize (8)
* uint32 blockMaxGasLimit (4)
* uint64 maxBlocksToVerifyPerProposal (8)
* uint64 ethDepositMinCountPerBlock (8)
* uint64 ethDepositMaxCountPerBlock (8)
* uint24 blockMaxTxListBytes (3)
* uint24 blobExpiry (3)
* bool blobAllowedForDA (1)
* bool blobReuseEnabled (1)
* uint8 blockSyncThreshold (1)
*/
10: struct Config {
11: // ---------------------------------------------------------------------
12: // Group 1: General configs
13: // ---------------------------------------------------------------------
14: // The chain ID of the network where Taiko contracts are deployed.
15: uint64 chainId;
16: // ---------------------------------------------------------------------
17: // Group 2: Block level configs
18: // ---------------------------------------------------------------------
19: // The maximum number of proposals allowed in a single block.
20: uint64 blockMaxProposals;
21: // Size of the block ring buffer, allowing extra space for proposals.
22: uint64 blockRingBufferSize;
23: // The maximum number of verifications allowed when a block is proposed.
24: uint64 maxBlocksToVerifyPerProposal;
25: // The maximum gas limit allowed for a block.
26: uint32 blockMaxGasLimit;
27: // The maximum allowed bytes for the proposed transaction list calldata.
28: uint24 blockMaxTxListBytes;
29: // The max period in seconds that a blob can be reused for DA.
30: uint24 blobExpiry;
31: // True if EIP-4844 is enabled for DA
32: bool blobAllowedForDA;
33: // True if blob can be reused
34: bool blobReuseEnabled;
35: // ---------------------------------------------------------------------
36: // Group 3: Proof related configs
37: // ---------------------------------------------------------------------
38: // The amount of Taiko token as a prover liveness bond
39: uint96 livenessBond;
40: // ---------------------------------------------------------------------
41: // Group 4: ETH deposit related configs
42: // ---------------------------------------------------------------------
43: // The size of the ETH deposit ring buffer.
44: uint256 ethDepositRingBufferSize;
45: // The minimum number of ETH deposits allowed per block.
46: uint64 ethDepositMinCountPerBlock;
47: // The maximum number of ETH deposits allowed per block.
48: uint64 ethDepositMaxCountPerBlock;
49: // The minimum amount of ETH required for a deposit.
50: uint96 ethDepositMinAmount;
51: // The maximum amount of ETH allowed for a deposit.
52: uint96 ethDepositMaxAmount;
53: // The gas cost for processing an ETH deposit.
54: uint256 ethDepositGas;
55: // The maximum fee allowed for an ETH deposit.
56: uint256 ethDepositMaxFee;
57: // The max number of L2 blocks that can stay unsynced on L1 (a value of zero disables
58: // syncing)
59: uint8 blockSyncThreshold;
60: }
// @audit Can save 1 storage slot (from 7 to 6)
// @audit Consider using the following order:
/*
* bytes32 extraData (32)
* bytes32 blobHash (32)
* bytes32 parentMetaHash (32)
* HookCall[] hookCalls (32)
* address assignedProver (20)
* uint24 txListByteOffset (3)
* uint24 txListByteSize (3)
* bool cacheBlobForReuse (1)
* address coinbase (20)
*/
78: struct BlockParams {
79: address assignedProver;
80: address coinbase;
81: bytes32 extraData;
82: bytes32 blobHash;
83: uint24 txListByteOffset;
84: uint24 txListByteSize;
85: bool cacheBlobForReuse;
86: bytes32 parentMetaHash;
87: HookCall[] hookCalls;
88: }
File: packages/protocol/contracts/signal/ISignalService.sol
// @audit Can save 1 storage slot (from 5 to 4)
// @audit Consider using the following order:
/*
* bytes32 rootHash (32)
* bytes[] accountProof (32)
* bytes[] storageProof (32)
* uint64 chainId (8)
* uint64 blockId (8)
* CacheOption cacheOption (1)
*/
20: struct HopProof {
21: uint64 chainId;
22: uint64 blockId;
23: bytes32 rootHash;
24: CacheOption cacheOption;
25: bytes[] accountProof;
26: bytes[] storageProof;
27: }
[20-27]
[G-04] Multiple mappings that share an ID can be combined into a single mapping of ID / struct
This can avoid a Gsset (20000 Gas) per mapping combined. Reads and writes will also be cheaper when a function requires both values as they both can fit in the same storage slot.
Finally, if both fields are accessed in the same function, this can save ~42 gas per access by not having to recalculate the key's keccak256 hash (Gkeccak256, 30 gas) and that calculation's associated stack operations.
There is 1 instance of this issue.
File: packages/protocol/contracts/automata-attestation/AutomataDcapV3Attestation.sol
// @audit consider merging _trustedUserMrEnclave, _trustedUserMrSigner
39: mapping(bytes32 enclave => bool trusted) private _trustedUserMrEnclave;
40: mapping(bytes32 signer => bool trusted) private _trustedUserMrSigner;
[39]
[G-05] Use of memory instead of storage for struct/array state variables
When fetching data from storage, using the memory keyword to assign it to a variable reads all fields of the struct/array and incurs a Gcoldsload (2100 gas) for each field.
This can be avoided by declaring the variable with the storage keyword and caching the necessary fields in stack variables.
The memory keyword should only be used if the entire struct/array is being returned or passed to a function that requires memory, or if it is being read from another memory array/struct.
There are 2 instances of this issue.
File: packages/protocol/contracts/automata-attestation/AutomataDcapV3Attestation.sol
180: EnclaveIdStruct.EnclaveId memory enclaveId = qeIdentity;
[180]
File: packages/protocol/contracts/tokenvault/ERC20Vault.sol
171: CanonicalERC20 memory ctoken = bridgedToCanonical[btokenOld_];
[171]
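A sketch of the recommended change for the ERC20Vault case above (the single-field read is illustrative):

```solidity
// Before: copies every field of the struct out of storage.
CanonicalERC20 memory ctoken = bridgedToCanonical[btokenOld_];

// After: keep a storage pointer and load only the fields actually used.
CanonicalERC20 storage ctoken = bridgedToCanonical[btokenOld_];
uint256 srcChainId = ctoken.chainId; // single-field read
```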
[G-06] State variables access within a loop
State variable reads and writes are more expensive than local variable reads and writes. Therefore, it is recommended to replace state variable reads and writes within loops with a local variable. Gas savings should be multiplied by the average loop length.
There are 4 instances of this issue.
File: packages/protocol/contracts/verifiers/SgxVerifier.sol
// @audit nextInstanceId
218: ids[i] = nextInstanceId;
[218]
File: packages/protocol/contracts/L1/provers/Guardians.sol
// @audit guardians
87: guardians.push(guardian);
// @audit guardians
88: guardianIds[guardian] = guardians.length;
// @audit minGuardians
135: if (count == minGuardians) return true;
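A sketch of the loop-caching pattern for the Guardians case (simplified; the real function differs):

```solidity
// Before: guardians.length is an SLOAD on every iteration.
for (uint256 i; i < _newGuardians.length; ++i) {
    guardians.push(_newGuardians[i]);
    guardianIds[_newGuardians[i]] = guardians.length;
}

// After: track the growing length in a stack variable.
uint256 len = guardians.length;
for (uint256 i; i < _newGuardians.length; ++i) {
    guardians.push(_newGuardians[i]);
    guardianIds[_newGuardians[i]] = ++len;
}
```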
[G-07] Unused non-constant state variables waste gas
If the variable is assigned a non-zero value, saves Gsset (20000 gas). If it’s assigned a zero value, saves Gsreset (2900 gas).
If the variable remains unassigned, there is no gas savings unless the variable is public, in which case the compiler-generated non-payable getter deployment cost is saved.
If the state variable is overriding an interface’s public function, mark the variable as constant or immutable so that it does not use a storage slot.
There are 2 instances of this issue.
File: packages/protocol/contracts/tokenvault/BaseNFTVault.sol
56: mapping(address btoken => CanonicalNFT canonical) public bridgedToCanonical;
59: mapping(uint256 chainId => mapping(address ctoken => address btoken)) public canonicalToBridged;
[G-08] State variables only set in the constructor should be declared immutable
Accessing state variables within a function involves an SLOAD operation, but immutable variables can be read directly without it, reducing gas costs. As these state variables are assigned only in the constructor, consider declaring them immutable.
There are 2 instances of this issue.
File: packages/protocol/contracts/automata-attestation/AutomataDcapV3Attestation.sol
52: address public owner;
[52]
File: packages/protocol/contracts/automata-attestation/utils/SigVerifyLib.sol
18: address private ES256VERIFIER;
[18]
[G-09] Cache state variables with stack variables
Caching of a state variable replaces each Gwarmaccess (100 gas) with a cheaper stack read. Other less obvious fixes/optimizations include having local memory caches of state variable structs, or having local caches of state variable contracts/addresses.
There are 17 instances of this issue.
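A minimal, hypothetical sketch of the pattern:

```solidity
contract IdAssigner {
    uint256 public nextInstanceId;

    function assign(uint256 n) external returns (uint256 last) {
        uint256 id = nextInstanceId;  // one SLOAD instead of n
        for (uint256 i; i < n; ++i) {
            last = id++;              // stack operations only
        }
        nextInstanceId = id;          // one SSTORE at the end
    }
}
```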
File: packages/protocol/contracts/common/AddressResolver.sol
// @audit addressManager on line 83
81: if (addressManager == address(0)) revert RESOLVER_INVALID_MANAGER();
[81]
File: packages/protocol/contracts/L1/TaikoL1.sol
// @audit state on lines 67, 70
69: if (!state.slotB.provingPaused) {
// @audit state on line 94
96: LibVerifying.verifyBlocks(state, config, this, maxBlocksToVerify);
// @audit state on line 151
154: ts_ = state.transitions[slot][blk_.verifiedTransitionId];
// @audit state on line 181
182: b_ = state.slotB;
File: packages/protocol/contracts/L2/CrossChainOwned.sol
// @audit nextTxId on line 53
43: if (txId != nextTxId) revert XCO_INVALID_TX_ID();
[43]
File: packages/protocol/contracts/L2/TaikoL2.sol
// @audit gasExcess on line 265
262: if (gasExcess > 0) {
// @audit lastSyncedBlock on line 275
276: numL1Blocks = _l1BlockId - lastSyncedBlock;
File: packages/protocol/contracts/team/TimelockTokenPool.sol
// @audit sharedVault on line 219
220: IERC20(costToken).safeTransferFrom(_recipient, sharedVault, costToWithdraw);
[220]
File: packages/protocol/contracts/tokenvault/BridgedERC20Base.sol
// @audit migratingAddress on line 63
61: if (msg.sender == migratingAddress) {
// @audit migratingAddress on line 80
80: emit MigratedTo(migratingAddress, _account, _amount);
File: packages/protocol/contracts/verifiers/SgxVerifier.sol
// @audit nextInstanceId on lines 218, 220, 222
217: instances[nextInstanceId] = Instance(_instances[i], validSince);
[217]
File: packages/protocol/contracts/L1/provers/Guardians.sol
// @audit version on line 95
92: ++version;
// @audit guardians on lines 74, 87, 88
77: delete guardians;
// @audit version on line 116
119: uint256 _approval = _approvals[version][_hash];
File: packages/protocol/contracts/team/airdrop/ERC20Airdrop.sol
// @audit token on line 63
71: IVotes(token).delegateBySig(delegatee, nonce, expiry, v, r, s);
[71]
File: packages/protocol/contracts/tokenvault/adapters/USDCAdapter.sol
// @audit usdc on line 48
49: usdc.burn(_amount);
[49]
[G-10] Modifiers order should be changed
According to the Solidity docs, modifiers are executed in the order they are presented, so the order can be improved to avoid unnecessary SLOADs and/or external calls by prioritizing cheaper modifiers first.
There are 5 instances of this issue.
File: packages/protocol/contracts/bridge/Bridge.sol
155: function recallMessage(
156: Message calldata _message,
157: bytes calldata _proof
158: )
159: external
160: nonReentrant
161: whenNotPaused
162: sameChain(_message.srcChainId)
217: function processMessage(
218: Message calldata _message,
219: bytes calldata _proof
220: )
221: external
222: nonReentrant
223: whenNotPaused
224: sameChain(_message.destChainId)
310: function retryMessage(
311: Message calldata _message,
312: bool _isLastAttempt
313: )
314: external
315: nonReentrant
316: whenNotPaused
317: sameChain(_message.destChainId)
File: packages/protocol/contracts/L1/TaikoL1.sol
75: function proveBlock(
76: uint64 _blockId,
77: bytes calldata _input
78: )
79: external
80: nonReentrant
81: whenNotPaused
82: whenProvingNotPaused
100: function verifyBlocks(uint64 _maxBlocksToVerify)
101: external
102: nonReentrant
103: whenNotPaused
104: whenProvingNotPaused
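For example, in proveBlock above, the pause checks (one SLOAD each) could run before nonReentrant (which both reads and writes its guard slot), so a call that fails a pause check never pays for the guard write. A sketch, assuming the modifiers have no ordering dependency:

```solidity
function proveBlock(uint64 _blockId, bytes calldata _input)
    external
    whenNotPaused         // cheap SLOAD checks first...
    whenProvingNotPaused
    nonReentrant          // ...reentrancy-guard SSTORE last
{
    // ...
}
```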
[G-11] Low level call can be optimized with assembly
returnData is copied to memory even if the variable is not utilized: the proper way to handle this is through a low-level assembly call.
// before
(bool success,) = payable(receiver).call{gas: gas, value: value}("");
//after
bool success;
assembly {
success := call(gas, receiver, value, 0, 0, 0, 0)
}
There is 1 instance of this issue.
File: packages/protocol/contracts/L2/CrossChainOwned.sol
50: (bool success,) = address(this).call(txdata);
[50]
[G-12] Use of memory instead of calldata for immutable arguments
Consider refactoring function arguments from memory to calldata when they are immutable, as calldata is cheaper.
There are 116 instances of this issue.
File: packages/protocol/contracts/automata-attestation/AutomataDcapV3Attestation.sol
// @audit quoteEnclaveReport
175: function _verifyQEReportWithIdentity(V3Struct.EnclaveReport memory quoteEnclaveReport)
// @audit pck, tcb
206: function _checkTcbLevels(
207: IPEMCertChainLib.PCKCertificateField memory pck,
208: TCBInfoStruct.TCBInfo memory tcb
// @audit pckCpuSvns, tcbCpuSvns
229: function _isCpuSvnHigherOrGreater(
230: uint256[] memory pckCpuSvns,
231: uint8[] memory tcbCpuSvns
// @audit certs
248: function _verifyCertChain(IPEMCertChainLib.ECSha256Certificate[] memory certs)
// @audit pckCertPubKey, signedQuoteData, authDataV3, qeEnclaveReport
303: function _enclaveReportSigVerification(
304: bytes memory pckCertPubKey,
305: bytes memory signedQuoteData,
306: V3Struct.ECDSAQuoteV3AuthData memory authDataV3,
307: V3Struct.EnclaveReport memory qeEnclaveReport
[175, 206-208, 229-231, 248, 303-307]
File: packages/protocol/contracts/L2/TaikoL2.sol
// @audit _config
252: function _calc1559BaseFee(
253: Config memory _config,
254: uint64 _l1BlockId,
255: uint32 _parentGasUsed
[252-255]
File: packages/protocol/contracts/libs/LibAddress.sol
// @audit _sig
61: function isValidSignature(
62: address _addr,
63: bytes32 _hash,
64: bytes memory _sig
[61-64]
File: packages/protocol/contracts/signal/SignalService.sol
// @audit _hop
206: function _verifyHopProof(
207: uint64 _chainId,
208: address _app,
209: bytes32 _signal,
210: bytes32 _value,
211: HopProof memory _hop,
212: address _signalService
// @audit _hop
271: function _cacheChainData(
272: HopProof memory _hop,
273: uint64 _chainId,
274: uint64 _blockId,
275: bytes32 _signalRoot,
276: bool _isFullProof,
277: bool _isLastHop
File: packages/protocol/contracts/team/TimelockTokenPool.sol
// @audit _sig
168: function withdraw(address _to, bytes memory _sig) external {
// @audit _grant
235: function _getAmountOwned(Grant memory _grant) private view returns (uint128) {
// @audit _grant
267: function _validateGrant(Grant memory _grant) private pure {
File: packages/protocol/contracts/tokenvault/BridgedERC1155.sol
// @audit _symbol, _name
38: function init(
39: address _owner,
40: address _addressManager,
41: address _srcToken,
42: uint256 _srcChainId,
43: string memory _symbol,
44: string memory _name
[38-44]
File: packages/protocol/contracts/tokenvault/BridgedERC20.sol
// @audit _symbol, _name
52: function init(
53: address _owner,
54: address _addressManager,
55: address _srcToken,
56: uint256 _srcChainId,
57: uint8 _decimals,
58: string memory _symbol,
59: string memory _name
[52-59]
File: packages/protocol/contracts/tokenvault/BridgedERC721.sol
// @audit _symbol, _name
31: function init(
32: address _owner,
33: address _addressManager,
34: address _srcToken,
35: uint256 _srcChainId,
36: string memory _symbol,
37: string memory _name
[31-37]
File: packages/protocol/contracts/tokenvault/ERC1155Vault.sol
// @audit _op
240: function _handleMessage(
241: address _user,
242: BridgeTransferOp memory _op
// @audit _ctoken
303: function _deployBridgedToken(CanonicalNFT memory _ctoken) private returns (address btoken_) {
File: packages/protocol/contracts/tokenvault/ERC20Vault.sol
// @audit ctoken
407: function _deployBridgedToken(CanonicalERC20 memory ctoken) private returns (address btoken) {
[407]
File: packages/protocol/contracts/tokenvault/ERC721Vault.sol
// @audit _tokenIds
160: function _transferTokens(
161: CanonicalNFT memory _ctoken,
162: address _to,
163: uint256[] memory _tokenIds
// @audit _op
187: function _handleMessage(
188: address _user,
189: BridgeTransferOp memory _op
// @audit _ctoken
240: function _deployBridgedToken(CanonicalNFT memory _ctoken) private returns (address btoken_) {
File: packages/protocol/contracts/tokenvault/LibBridgedToken.sol
// @audit _symbol, _name
11: function validateInputs(
12: address _srcToken,
13: uint256 _srcChainId,
14: string memory _symbol,
15: string memory _name
// @audit _name
28: function buildName(
29: string memory _name,
30: uint256 _srcChainId
// @audit _symbol
39: function buildSymbol(string memory _symbol) internal pure returns (string memory) {
File: packages/protocol/contracts/verifiers/SgxVerifier.sol
// @audit _instances
195: function _addInstances(
196: address[] memory _instances,
197: bool instantValid
[195-197]
File: packages/protocol/contracts/automata-attestation/lib/PEMCertChainLib.sol
// @audit pemChain
40: function splitCertificateChain(
41: bytes memory pemChain,
42: uint256 size
// @audit der
74: function decodeCert(
75: bytes memory der,
76: bool isPckCert
// @audit pemData
216: function _removeHeadersAndFooters(string memory pemData)
// @audit input
252: function _trimBytes(
253: bytes memory input,
254: uint256 expectedLength
// @audit der
269: function _findPckTcbInfo(
270: bytes memory der,
271: uint256 tbsPtr,
272: uint256 tbsParentPtr
// @audit der
341: function _findTcb(
342: bytes memory der,
343: uint256 oidPtr
[40-42, 74-76, 216, 252-254, 269-272, 341-343]
File: packages/protocol/contracts/automata-attestation/utils/Asn1Decode.sol
// @audit der
47: function root(bytes memory der) internal pure returns (uint256) {
// @audit der
56: function rootOfBitStringAt(bytes memory der, uint256 ptr) internal pure returns (uint256) {
// @audit der
66: function rootOfOctetStringAt(bytes memory der, uint256 ptr) internal pure returns (uint256) {
// @audit der
77: function nextSiblingOf(bytes memory der, uint256 ptr) internal pure returns (uint256) {
// @audit der
87: function firstChildOf(bytes memory der, uint256 ptr) internal pure returns (uint256) {
// @audit der
111: function bytesAt(bytes memory der, uint256 ptr) internal pure returns (bytes memory) {
// @audit der
121: function allBytesAt(bytes memory der, uint256 ptr) internal pure returns (bytes memory) {
// @audit der
131: function bytes32At(bytes memory der, uint256 ptr) internal pure returns (bytes32) {
// @audit der
141: function uintAt(bytes memory der, uint256 ptr) internal pure returns (uint256) {
// @audit der
154: function uintBytesAt(bytes memory der, uint256 ptr) internal pure returns (bytes memory) {
// @audit der
165: function keccakOfBytesAt(bytes memory der, uint256 ptr) internal pure returns (bytes32) {
// @audit der
169: function keccakOfAllBytesAt(bytes memory der, uint256 ptr) internal pure returns (bytes32) {
// @audit der
179: function bitstringAt(bytes memory der, uint256 ptr) internal pure returns (bytes memory) {
// @audit der
187: function _readNodeLength(bytes memory der, uint256 ix) private pure returns (uint256) {
[47, 56, 66, 77, 87, 111, 121, 131, 141, 154, 165, 169, 179, 187]
File: packages/protocol/contracts/automata-attestation/utils/BytesUtils.sol
// @audit self
16: function keccak(
17: bytes memory self,
18: uint256 offset,
19: uint256 len
// @audit self, other
39: function compare(bytes memory self, bytes memory other) internal pure returns (int256) {
// @audit self, other
56: function compare(
57: bytes memory self,
58: uint256 offset,
59: uint256 len,
60: bytes memory other,
61: uint256 otheroffset,
62: uint256 otherlen
// @audit self, other
116: function equals(
117: bytes memory self,
118: uint256 offset,
119: bytes memory other,
120: uint256 otherOffset,
121: uint256 len
// @audit self, other
138: function equals(
139: bytes memory self,
140: uint256 offset,
141: bytes memory other,
142: uint256 otherOffset
// @audit self, other
160: function equals(
161: bytes memory self,
162: uint256 offset,
163: bytes memory other
// @audit self, other
178: function equals(bytes memory self, bytes memory other) internal pure returns (bool) {
// @audit self
188: function readUint8(bytes memory self, uint256 idx) internal pure returns (uint8 ret) {
// @audit self
198: function readUint16(bytes memory self, uint256 idx) internal pure returns (uint16 ret) {
// @audit self
211: function readUint32(bytes memory self, uint256 idx) internal pure returns (uint32 ret) {
// @audit self
224: function readBytes32(bytes memory self, uint256 idx) internal pure returns (bytes32 ret) {
// @audit self
237: function readBytes20(bytes memory self, uint256 idx) internal pure returns (bytes20 ret) {
// @audit self
255: function readBytesN(
256: bytes memory self,
257: uint256 idx,
258: uint256 len
// @audit self
284: function substring(
285: bytes memory self,
286: uint256 offset,
287: uint256 len
// @audit self
320: function base32HexDecodeWord(
321: bytes memory self,
322: uint256 off,
323: uint256 len
// @audit a, b
371: function compareBytes(bytes memory a, bytes memory b) internal pure returns (bool) {
[16-19, 39, 56-62, 116-121, 138-142, 160-163, 178, 188, 198, 211, 224, 237, 255-258, 284-287, 320-323, 371]
File: packages/protocol/contracts/automata-attestation/utils/RsaVerify.sol
// @audit _s, _e, _m
43: function pkcs1Sha256(
44: bytes32 _sha256,
45: bytes memory _s,
46: bytes memory _e,
47: bytes memory _m
// @audit _data, _s, _e, _m
191: function pkcs1Sha256Raw(
192: bytes memory _data,
193: bytes memory _s,
194: bytes memory _e,
195: bytes memory _m
// @audit _s, _e, _m
212: function pkcs1Sha1(
213: bytes20 _sha1,
214: bytes memory _s,
215: bytes memory _e,
216: bytes memory _m
// @audit _data, _s, _e, _m
307: function pkcs1Sha1Raw(
308: bytes memory _data,
309: bytes memory _s,
310: bytes memory _e,
311: bytes memory _m
[43-47, 191-195, 212-216, 307-311]
File: packages/protocol/contracts/automata-attestation/utils/SHA1.sol
// @audit data
11: function sha1(bytes memory data) internal pure returns (bytes20 ret) {
[11]
File: packages/protocol/contracts/automata-attestation/utils/SigVerifyLib.sol
// @audit tbs, signature, publicKey
24: function verifyAttStmtSignature(
25: bytes memory tbs,
26: bytes memory signature,
27: PublicKey memory publicKey,
28: Algorithm alg
// @audit tbs, signature, publicKey
54: function verifyCertificateSignature(
55: bytes memory tbs,
56: bytes memory signature,
57: PublicKey memory publicKey,
58: CertSigAlgorithm alg
// @audit tbs, signature, publicKey
79: function verifyRS256Signature(
80: bytes memory tbs,
81: bytes memory signature,
82: bytes memory publicKey
// @audit tbs, signature, publicKey
96: function verifyRS1Signature(
97: bytes memory tbs,
98: bytes memory signature,
99: bytes memory publicKey
// @audit tbs, signature, publicKey
113: function verifyES256Signature(
114: bytes memory tbs,
115: bytes memory signature,
116: bytes memory publicKey
[24-28, 54-58, 79-82, 96-99, 113-116]
File: packages/protocol/contracts/automata-attestation/utils/X509DateUtils.sol
// @audit x509Time
8: function toTimestamp(bytes memory x509Time) internal pure returns (uint256) {
[8]
File: packages/protocol/contracts/L1/gov/TaikoGovernor.sol
// @audit _description
48: function propose(
49: address[] memory _targets,
50: uint256[] memory _values,
51: bytes[] memory _calldatas,
52: string memory _description
// @audit _description
69: function propose(
70: address[] memory _targets,
71: uint256[] memory _values,
72: string[] memory _signatures,
73: bytes[] memory _calldatas,
74: string memory _description
File: packages/protocol/contracts/L1/hooks/AssignmentHook.sol
// @audit _blk, _data
62: function onBlockProposed(
63: TaikoData.Block memory _blk,
64: TaikoData.BlockMetadata memory _meta,
65: bytes memory _data
// @audit _assignment
137: function hashAssignment(
138: ProverAssignment memory _assignment,
139: address _taikoL1Address,
140: bytes32 _blobHash
// @audit _tierFees
164: function _getProverFee(
165: TaikoData.TierFee[] memory _tierFees,
166: uint16 _tierId
File: packages/protocol/contracts/L1/libs/LibDepositing.sol
// @audit _config
67: function processDeposits(
68: TaikoData.State storage _state,
69: TaikoData.Config memory _config,
70: address _feeRecipient
// @audit _config
122: function canDepositEthToL2(
123: TaikoData.State storage _state,
124: TaikoData.Config memory _config,
125: uint256 _amount
File: packages/protocol/contracts/L1/libs/LibProposing.sol
// @audit _config
287: function isBlobReusable(
288: TaikoData.State storage _state,
289: TaikoData.Config memory _config,
290: bytes32 _blobHash
// @audit _slotB
299: function _isProposerPermitted(
300: TaikoData.SlotB memory _slotB,
301: IAddressResolver _resolver
File: packages/protocol/contracts/L1/libs/LibProving.sol
// @audit _config
91: function proveBlock(
92: TaikoData.State storage _state,
93: TaikoData.Config memory _config,
94: IAddressResolver _resolver,
95: TaikoData.BlockMetadata memory _meta,
96: TaikoData.Transition memory _tran,
97: TaikoData.TierProof memory _proof
// @audit _tran
269: function _createTransition(
270: TaikoData.State storage _state,
271: TaikoData.Block storage _blk,
272: TaikoData.Transition memory _tran,
273: uint64 slot
// @audit _tran, _proof, _tier
350: function _overrideWithHigherProof(
351: TaikoData.TransitionState storage _ts,
352: TaikoData.Transition memory _tran,
353: TaikoData.TierProof memory _proof,
354: ITierProvider.Tier memory _tier,
355: IERC20 _tko,
356: bool _sameTransition
// @audit _tier
401: function _checkProverPermission(
402: TaikoData.State storage _state,
403: TaikoData.Block storage _blk,
404: TaikoData.TransitionState storage _ts,
405: uint32 _tid,
406: ITierProvider.Tier memory _tier
[91-97, 269-273, 350-356, 401-406]
File: packages/protocol/contracts/L1/libs/LibUtils.sol
// @audit _config
23: function getTransition(
24: TaikoData.State storage _state,
25: TaikoData.Config memory _config,
26: uint64 _blockId,
27: bytes32 _parentHash
// @audit _config
52: function getBlock(
53: TaikoData.State storage _state,
54: TaikoData.Config memory _config,
55: uint64 _blockId
File: packages/protocol/contracts/L1/libs/LibVerifying.sol
// @audit _config
224: function _syncChainData(
225: TaikoData.Config memory _config,
226: IAddressResolver _resolver,
227: uint64 _lastVerifiedBlockId,
228: bytes32 _stateRoot
// @audit _config
245: function _isConfigValid(TaikoData.Config memory _config) private view returns (bool) {
File: packages/protocol/contracts/thirdparty/nomad-xyz/ExcessivelySafeCall.sol
// @audit _calldata
25: function excessivelySafeCall(
26: address _target,
27: uint256 _gas,
28: uint256 _value,
29: uint16 _maxCopy,
30: bytes memory _calldata
[25-30]
File: packages/protocol/contracts/thirdparty/optimism/Bytes.sol
// @audit _bytes
15: function slice(
16: bytes memory _bytes,
17: uint256 _start,
18: uint256 _length
// @audit _bytes
91: function slice(bytes memory _bytes, uint256 _start) internal pure returns (bytes memory) {
// @audit _bytes
102: function toNibbles(bytes memory _bytes) internal pure returns (bytes memory) {
// @audit _bytes, _other
149: function equal(bytes memory _bytes, bytes memory _other) internal pure returns (bool) {
File: packages/protocol/contracts/automata-attestation/lib/QuoteV3Auth/V3Parser.sol
// @audit quote
21: function parseInput(
22: bytes memory quote,
23: address pemCertLibAddr
// @audit v3Quote
62: function validateParsedInput(V3Struct.ParsedV3QuoteStruct memory v3Quote)
// @audit rawEnclaveReport
133: function parseEnclaveReport(bytes memory rawEnclaveReport)
// @audit encoded
152: function littleEndianDecode(bytes memory encoded) private pure returns (uint256 decoded) {
// @audit rawHeader
165: function parseAndVerifyHeader(bytes memory rawHeader)
// @audit rawAuthData
203: function parseAuthDataAndVerifyCertType(
204: bytes memory rawAuthData,
205: address pemCertLibAddr
// @audit enclaveReport
244: function packQEReport(V3Struct.EnclaveReport memory enclaveReport)
// @audit certBytes
267: function parseCerificationChainBytes(
268: bytes memory certBytes,
269: address pemCertLibAddr
[21-23, 62, 133, 152, 165, 203-205, 244, 267-269]
File: packages/protocol/contracts/thirdparty/optimism/rlp/RLPReader.sol
// @audit _in
35: function toRLPItem(bytes memory _in) internal pure returns (RLPItem memory out_) {
// @audit _in
102: function readList(bytes memory _in) internal pure returns (RLPItem[] memory out_) {
// @audit _in
128: function readBytes(bytes memory _in) internal pure returns (bytes memory out_) {
// @audit _in
135: function readRawBytes(RLPItem memory _in) internal pure returns (bytes memory out_) {
// @audit _in
144: function _decodeLength(RLPItem memory _in)
File: packages/protocol/contracts/thirdparty/optimism/rlp/RLPWriter.sol
// @audit _in
13: function writeBytes(bytes memory _in) internal pure returns (bytes memory out_) {
[13]
File: packages/protocol/contracts/thirdparty/optimism/trie/MerkleTrie.sol
// @audit _key, _value
50: function verifyInclusionProof(
51: bytes memory _key,
52: bytes memory _value,
53: bytes[] memory _proof,
54: bytes32 _root
// @audit _key
68: function get(
69: bytes memory _key,
70: bytes[] memory _proof,
71: bytes32 _root
// @audit _proof
205: function _parseProof(bytes[] memory _proof) private pure returns (TrieNode[] memory proof_) {
// @audit _node
227: function _getNodePath(TrieNode memory _node) private pure returns (bytes memory nibbles_) {
// @audit _a, _b
235: function _getSharedNibbleLength(
236: bytes memory _a,
237: bytes memory _b
[50-54, 68-71, 205, 227, 235-237]
File: packages/protocol/contracts/thirdparty/optimism/trie/SecureMerkleTrie.sol
// @audit _key, _value
19: function verifyInclusionProof(
20: bytes memory _key,
21: bytes memory _value,
22: bytes[] memory _proof,
23: bytes32 _root
// @audit _key
38: function get(
39: bytes memory _key,
40: bytes[] memory _proof,
41: bytes32 _root
// @audit _key
54: function _getSecureKey(bytes memory _key) private pure returns (bytes memory hash_) {
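The savings come from skipping the calldata-to-memory copy, as this minimal sketch shows (illustrative function names; note that for internal helpers this only applies when every caller already holds the value in calldata):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract CalldataExample {
    // Before: `memory` forces the compiler to copy the whole array
    // from calldata into memory on every external call.
    function sumMemory(uint256[] memory xs) external pure returns (uint256 s) {
        for (uint256 i; i < xs.length; ++i) s += xs[i];
    }

    // After: `calldata` reads the argument in place, skipping the copy.
    // Only valid when the function never mutates the argument.
    function sumCalldata(uint256[] calldata xs) external pure returns (uint256 s) {
        for (uint256 i; i < xs.length; ++i) s += xs[i];
    }
}
```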
[G-13] Avoid updating storage when the value hasn’t changed
If the old value is equal to the new value, not re-storing it will avoid a Gsreset (2900 gas), potentially at the expense of a Gcoldsload (2100 gas) or a Gwarmaccess (100 gas).
There are 12 instances of this issue.
File: packages/protocol/contracts/automata-attestation/AutomataDcapV3Attestation.sol
// @audit _trustedUserMrSigner
65: function setMrSigner(bytes32 _mrSigner, bool _trusted) external onlyOwner {
// @audit _trustedUserMrEnclave
69: function setMrEnclave(bytes32 _mrEnclave, bool _trusted) external onlyOwner {
// @audit tcbInfo
103: function configureTcbInfoJson(
// @audit qeIdentity
114: function configureQeIdentityJson(EnclaveIdStruct.EnclaveId calldata qeIdentityInput)
File: packages/protocol/contracts/bridge/Bridge.sol
// @audit messageStatus
515: function _updateMessageStatus(bytes32 _msgHash, Status _status) private {
[515]
File: packages/protocol/contracts/common/AddressManager.sol
// @audit __addresses
38: function setAddress(
[38]
File: packages/protocol/contracts/L2/TaikoL2EIP1559Configurable.sol
// @audit customConfig
25: function setConfigAndExcess(
[25]
File: packages/protocol/contracts/tokenvault/BridgedERC20.sol
// @audit snapshooter
80: function setSnapshoter(address _snapshooter) external onlyOwner {
[80]
File: packages/protocol/contracts/tokenvault/BridgedERC20Base.sol
// @audit migratingAddress, migratingInbound
36: function changeMigrationStatus(
[36]
File: packages/protocol/contracts/tokenvault/ERC20Vault.sol
// @audit btokenBlacklist, bridgedToCanonical, canonicalToBridged
148: function changeBridgedToken(
[148]
File: packages/protocol/contracts/L1/provers/Guardians.sol
// @audit guardianIds, minGuardians
53: function setGuardians(
[53]
File: packages/protocol/contracts/team/airdrop/MerkleClaimable.sol
// @audit claimStart, claimEnd, merkleRoot
90: function _setConfig(uint64 _claimStart, uint64 _claimEnd, bytes32 _merkleRoot) private {
[90]
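A guard of the following shape (a sketch with a hypothetical setter, not the project's exact code) trades one SLOAD for the avoided redundant SSTORE:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract NoOpStoreGuard {
    error NoChange();

    address public snapshooter;

    function setSnapshooter(address _snapshooter) external {
        // Skip the SSTORE when the value is unchanged: the comparison costs
        // one SLOAD (2100 cold / 100 warm) but saves a Gsreset (2900 gas).
        if (snapshooter == _snapshooter) revert NoChange(); // or simply return
        snapshooter = _snapshooter;
    }
}
```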
[G-14] Use of emit inside a loop
Emitting an event inside a loop performs a LOG op N times, where N is the loop length. Consider refactoring the code to emit the event only once at the end of the loop. Gas savings should be multiplied by the average loop length.
There are 4 instances of this issue.
File: packages/protocol/contracts/bridge/Bridge.sol
93: emit MessageSuspended(msgHash, _suspend);
[93]
File: packages/protocol/contracts/verifiers/SgxVerifier.sol
109: emit InstanceDeleted(idx, instances[idx].addr);
220: emit InstanceAdded(nextInstanceId, _instances[i], address(0), validSince);
File: packages/protocol/contracts/L1/libs/LibVerifying.sol
198: emit BlockVerified({
199: blockId: blockId,
200: assignedProver: blk.assignedProver,
201: prover: ts.prover,
202: blockHash: blockHash,
203: stateRoot: stateRoot,
204: tier: ts.tier,
205: contestations: ts.contestations
206: });
[198-206]
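One way to batch the log, sketched below with a hypothetical array-typed event (off-chain consumers would need to be updated to decode the array form):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract BatchSuspend {
    event MessageSuspended(bytes32 msgHash, bool suspended);
    event MessagesSuspended(bytes32[] msgHashes, bool suspended);

    mapping(bytes32 => bool) public suspended;

    // Before: one LOG op per iteration.
    function suspendEach(bytes32[] calldata hashes) external {
        for (uint256 i; i < hashes.length; ++i) {
            suspended[hashes[i]] = true;
            emit MessageSuspended(hashes[i], true);
        }
    }

    // After: a single LOG op for the whole batch.
    function suspendBatch(bytes32[] calldata hashes) external {
        for (uint256 i; i < hashes.length; ++i) {
            suspended[hashes[i]] = true;
        }
        emit MessagesSuspended(hashes, true);
    }
}
```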
[G-15] Use uint256(1)/uint256(2) instead of true/false to save gas for changes
Use uint256(1) and uint256(2) for true/false to avoid a Gsset (20000 gas) when changing from false to true, after having been true in the past. See source.
There are 10 instances of this issue.
File: packages/protocol/contracts/automata-attestation/AutomataDcapV3Attestation.sol
38: bool private _checkLocalEnclaveReport;
39: mapping(bytes32 enclave => bool trusted) private _trustedUserMrEnclave;
40: mapping(bytes32 signer => bool trusted) private _trustedUserMrSigner;
47: mapping(uint256 idx => mapping(bytes serialNum => bool revoked)) private _serialNumIsRevoked;
File: packages/protocol/contracts/bridge/Bridge.sol
42: mapping(address addr => bool banned) public addressBanned;
[42]
File: packages/protocol/contracts/signal/SignalService.sol
21: mapping(address addr => bool authorized) public isAuthorized;
[21]
File: packages/protocol/contracts/tokenvault/BridgedERC20Base.sol
14: bool public migratingInbound;
[14]
File: packages/protocol/contracts/tokenvault/ERC20Vault.sol
52: mapping(address btoken => bool blacklisted) public btokenBlacklist;
[52]
File: packages/protocol/contracts/verifiers/SgxVerifier.sol
55: mapping(address instanceAddress => bool alreadyAttested) public addressRegistered;
[55]
File: packages/protocol/contracts/team/airdrop/MerkleClaimable.sol
12: mapping(bytes32 hash => bool claimed) public isClaimed;
[12]
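The pattern looks like this minimal sketch (hypothetical names; it mirrors the approach OpenZeppelin's ReentrancyGuard uses for its status flag). Because the slot is never reset to zero once initialized, later false-to-true transitions pay a Gsreset instead of a Gsset:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract UintFlag {
    // Sentinel values: the slot is written to 1 or 2, never cleared to 0,
    // so toggling after the first write avoids the 20000-gas Gsset.
    uint256 private constant _FALSE = 1;
    uint256 private constant _TRUE = 2;

    mapping(bytes32 => uint256) private _claimed;

    function isClaimed(bytes32 h) public view returns (bool) {
        return _claimed[h] == _TRUE;
    }

    function _setClaimed(bytes32 h, bool v) internal {
        _claimed[h] = v ? _TRUE : _FALSE;
    }
}
```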
[G-16] Short-circuit rules can be used to optimize some gas usage
Some conditions may be reordered to save an SLOAD (2100 gas), as we avoid reading state variables when the first part of the condition fails (with &&) or succeeds (with ||).
There is 1 instance of this issue.
File: packages/protocol/contracts/L2/CrossChainOwned.sol
// @audit switch with this condition
// ctx.from != owner() || ctx.srcChainId != ownerChainId
46: if (ctx.srcChainId != ownerChainId || ctx.from != owner()) {
[46]
[G-17] Cache multiple accesses of a mapping/array
Consider using a local storage or calldata variable when accessing a mapping/array value multiple times. This avoids recalculating the mapping hash and/or the array offsets.
There are 13 instances of this issue.
File: packages/protocol/contracts/automata-attestation/AutomataDcapV3Attestation.sol
// @audit _serialNumIsRevoked on line 81
84: _serialNumIsRevoked[index][serialNumBatch[i]] = true;
// @audit _serialNumIsRevoked[index] on line 96
99: delete _serialNumIsRevoked[index][serialNumBatch[i]];
// @audit _serialNumIsRevoked[uint256(IPEMCertChainLib.CRL.PCK)] on line 268
271: certRevoked = _serialNumIsRevoked[uint256(IPEMCertChainLib.CRL.PCK)][certs[i]
272: .serialNumber];
File: packages/protocol/contracts/bridge/Bridge.sol
// @audit proofReceipt on lines 168, 184
190: delete proofReceipt[msgHash];
[190]
File: packages/protocol/contracts/common/AddressManager.sol
// @audit __addresses on line 47
49: __addresses[_chainId][_name] = _newAddress;
[49]
File: packages/protocol/contracts/L1/TaikoL1.sol
// @audit state.transitions on line 154
154: ts_ = state.transitions[slot][blk_.verifiedTransitionId];
[154]
File: packages/protocol/contracts/signal/SignalService.sol
// @audit topBlockId on line 247
248: topBlockId[_chainId][_kind] = _blockId;
[248]
File: packages/protocol/contracts/team/TimelockTokenPool.sol
// @audit recipients on line 137
142: recipients[_recipient].grant = _grant;
[142]
File: packages/protocol/contracts/tokenvault/ERC20Vault.sol
// @audit canonicalToBridged on line 168
189: canonicalToBridged[_ctoken.chainId][_ctoken.addr] = _btokenNew;
// @audit bridgedToCanonical on line 358
359: ctoken_ = bridgedToCanonical[_token];
File: packages/protocol/contracts/verifiers/SgxVerifier.sol
// @audit instances on lines 107, 109
111: delete instances[idx];
// @audit instances on lines 235, 236
237: && block.timestamp <= instances[id].validSince + INSTANCE_EXPIRY;
File: packages/protocol/contracts/L1/provers/Guardians.sol
// @audit _approvals on line 116
119: uint256 _approval = _approvals[version][_hash];
[119]
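For a nested mapping, binding the inner mapping to a local storage reference computes the outer keccak256 only once, as in this sketch (hypothetical contract and function):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract CacheMapping {
    mapping(uint256 => mapping(bytes32 => uint256)) private _approvals;

    // Each `_approvals[version][_hash]` access recomputes two keccak256
    // hashes to locate the slot; the local reference reuses the first one.
    function approve(uint256 version, bytes32 _hash) external returns (uint256) {
        mapping(bytes32 => uint256) storage approvalsForVersion = _approvals[version];
        approvalsForVersion[_hash] |= 1;
        return approvalsForVersion[_hash];
    }
}
```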
[G-18] Redundant state variable getters
Getters for public state variables are generated automatically by the compiler, so there is no need to code them manually; doing so adds unnecessary overhead.
There are 2 instances of this issue.
File: packages/protocol/contracts/L2/TaikoL2.sol
200: function getBlockHash(uint64 _blockId) public view returns (bytes32) {
201: if (_blockId >= block.number) return 0;
202: if (_blockId + 256 >= block.number) return blockhash(_blockId);
203: return l2Hashes[_blockId];
204: }
[200-204]
File: packages/protocol/contracts/L2/TaikoL2EIP1559Configurable.sol
43: function getConfig() public view override returns (Config memory) {
44: return customConfig;
45: }
[43-45]
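The simplification, sketched with a hypothetical variable name (note this only applies when the manual getter adds no extra logic):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Before: a manual getter duplicates what the compiler already generates.
contract ManualGetter {
    uint64 private _lastSyncedBlock;

    function lastSyncedBlock() public view returns (uint64) {
        return _lastSyncedBlock;
    }
}

// After: `public` emits an equivalent getter for free.
contract AutoGetter {
    uint64 public lastSyncedBlock; // lastSyncedBlock() generated by the compiler
}
```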
[G-19] Using private for constants saves gas
Saves deployment gas because the compiler does not have to create a non-payable getter function, store the bytes of the value outside of where it's used, or add another entry to the method-ID table.
There are 14 instances of this issue.
File: packages/protocol/contracts/L2/TaikoL2.sol
35: uint8 public constant BLOCK_SYNC_THRESHOLD = 5;
[35]
File: packages/protocol/contracts/libs/Lib4844.sol
10: address public constant POINT_EVALUATION_PRECOMPILE_ADDRESS = address(0x0A);
13: uint32 public constant FIELD_ELEMENTS_PER_BLOB = 4096;
16: uint256 public constant BLS_MODULUS =
17: 52_435_875_175_126_190_479_447_740_508_185_965_837_690_552_500_527_637_822_603_658_699_938_581_184_513;
File: packages/protocol/contracts/tokenvault/BaseNFTVault.sol
47: bytes4 public constant ERC1155_INTERFACE_ID = 0xd9b67a26;
50: bytes4 public constant ERC721_INTERFACE_ID = 0x80ac58cd;
53: uint256 public constant MAX_TOKEN_PER_TXN = 10;
File: packages/protocol/contracts/verifiers/SgxVerifier.sol
30: uint64 public constant INSTANCE_EXPIRY = 180 days;
34: uint64 public constant INSTANCE_VALIDITY_DELAY = 1 days;
File: packages/protocol/contracts/L1/hooks/AssignmentHook.sol
38: uint256 public constant MAX_GAS_PAYING_PROVER = 50_000;
[38]
File: packages/protocol/contracts/L1/libs/LibProposing.sol
21: uint256 public constant MAX_BYTES_PER_BLOB = 4096 * 32;
[21]
File: packages/protocol/contracts/L1/libs/LibProving.sol
20: bytes32 public constant RETURN_LIVENESS_BOND = keccak256("RETURN_LIVENESS_BOND");
23: bytes32 public constant TIER_OP = bytes32("tier_optimistic");
File: packages/protocol/contracts/L1/provers/Guardians.sol
11: uint256 public constant MIN_NUM_GUARDIANS = 5;
[11]
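A minimal illustration of the change (values externally visible only where genuinely needed; names are illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract Constants {
    // Before: `public` makes the compiler emit a getter function and an
    // extra entry in the method-ID dispatch table.
    uint256 public constant MIN_NUM_GUARDIANS_OLD = 5;

    // After: `private` skips the generated getter entirely.
    uint256 private constant MIN_NUM_GUARDIANS = 5;

    // Expose the value explicitly only if external callers truly need it.
    function minNumGuardians() external pure returns (uint256) {
        return MIN_NUM_GUARDIANS;
    }
}
```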
[G-20] require() or revert() statements that check input arguments should be at the top of the function
Checks that can be performed earlier should come before checks that involve state variables, function calls, and calculations. By doing these checks first, the function is able to revert before wasting a Gcoldsload (2100 gas) in a function that may ultimately revert.
There are 4 instances of this issue.
File: packages/protocol/contracts/common/EssentialContract.sol
// @audit expensive op on line 103
105: if (_addressManager == address(0)) revert ZERO_ADDR_MANAGER();
[105]
File: packages/protocol/contracts/team/TimelockTokenPool.sol
// @audit expensive op on line 120
121: if (_taikoToken == address(0)) revert INVALID_PARAM();
// @audit expensive op on line 120
124: if (_costToken == address(0)) revert INVALID_PARAM();
// @audit expensive op on line 120
127: if (_sharedVault == address(0)) revert INVALID_PARAM();
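Reordering so cheap calldata checks precede storage reads can be sketched as follows (hypothetical function; the point is that the zero-address check costs a few gas while the skipped SLOAD costs 2100 when cold):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract CheckOrder {
    error INVALID_PARAM();

    mapping(address => uint256) private _balances;

    // Before: the SLOAD on `_balances` is wasted whenever `_to` is zero.
    function grantBad(address _to, uint256 _amount) external {
        uint256 bal = _balances[msg.sender];           // expensive op first
        if (_to == address(0)) revert INVALID_PARAM(); // cheap check last
        _balances[_to] = bal + _amount;
    }

    // After: revert on bad input before touching storage.
    function grant(address _to, uint256 _amount) external {
        if (_to == address(0)) revert INVALID_PARAM();
        _balances[_to] = _balances[msg.sender] + _amount;
    }
}
```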
[G-21] Consider activating via-ir for deploying
The IR-based code generator was developed to make code generation more performant by enabling optimization passes that can be applied across functions.
It is possible to activate the IR-based code generator through the command line by using the flag --via-ir, or by including the option {"viaIR": true} in the standard-JSON settings.
Keep in mind that compiling with this option may take longer; however, you can simply test it before deploying your code. If you find that it provides better performance, add the --via-ir flag to your deploy command.
[G-22] Function calls should be cached instead of re-calling the function
Consider caching the result instead of re-calling the function when possible. Note: this also includes casts, which cost between 42 and 46 gas, depending on the type.
There are 12 instances of this issue.
File: packages/protocol/contracts/L2/TaikoL2.sol
// @audit blockhash(parentId) is duplicated on line 157
154: l2Hashes[parentId] = blockhash(parentId);
[154]
File: packages/protocol/contracts/automata-attestation/lib/PEMCertChainLib.sol
// @audit der.nextSiblingOf(tbsPtr) is duplicated on line 111, 112, 127, 144, 157, 186
104: tbsPtr = der.nextSiblingOf(tbsPtr);
// @audit der.firstChildOf(tbsPtr) is duplicated on line 130, 147, 161, 193, 194
115: uint256 issuerPtr = der.firstChildOf(tbsPtr);
// @audit der.firstChildOf(issuerPtr) is duplicated on line 117
116: issuerPtr = der.firstChildOf(issuerPtr);
// @audit der.firstChildOf(subjectPtr) is duplicated on line 149
148: subjectPtr = der.firstChildOf(subjectPtr);
// @audit der.nextSiblingOf(sigPtr) is duplicated on line 176
167: sigPtr = der.nextSiblingOf(sigPtr);
// @audit _trimBytes(der.bytesAt(sigPtr), 32) is duplicated on line 177
174: bytes memory sigX = _trimBytes(der.bytesAt(sigPtr), 32);
// @audit der.bytesAt(sigPtr) is duplicated on line 177
174: bytes memory sigX = _trimBytes(der.bytesAt(sigPtr), 32);
// @audit der.nextSiblingOf(extnValueOidPtr) is duplicated on line 318
312: uint256 pceidPtr = der.nextSiblingOf(extnValueOidPtr);
// @audit bytes2(svnValueBytes) is duplicated on line 360
359: ? uint16(bytes2(svnValueBytes)) / 256
[104, 115, 116, 148, 167, 174, 174, 312, 359]
File: packages/protocol/contracts/thirdparty/optimism/rlp/RLPReader.sol
// @audit MemoryPointer.wrap(MemoryPointer.unwrap(_in.ptr) + offset) is duplicated on line 86
78: ptr: MemoryPointer.wrap(MemoryPointer.unwrap(_in.ptr) + offset)
// @audit MemoryPointer.unwrap(_in.ptr) is duplicated on line 86
78: ptr: MemoryPointer.wrap(MemoryPointer.unwrap(_in.ptr) + offset)
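The blockhash instance above can be sketched like this (hypothetical function names): evaluate the call once, bind it to a local, and reuse the local.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract CacheCall {
    mapping(uint256 => bytes32) public l2Hashes;

    // Before: `blockhash(parentId)` is evaluated twice.
    function storeBad(uint256 parentId) external returns (bytes32) {
        l2Hashes[parentId] = blockhash(parentId);
        return blockhash(parentId); // second identical call
    }

    // After: call once, reuse the cached local.
    function store(uint256 parentId) external returns (bytes32) {
        bytes32 parentHash = blockhash(parentId);
        l2Hashes[parentId] = parentHash;
        return parentHash;
    }
}
```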
[G-23] Functions that revert when called by normal users can be payable
If a function modifier such as onlyOwner is used, the function will revert when a normal user tries to call it, so it is safe to mark the function as payable. Doing so lowers the gas cost for legitimate callers, as the compiler will not include checks for whether a payment was provided.
The extra opcodes avoided are:
CALLVALUE(2), DUP1(3), ISZERO(3), PUSH2(3), JUMPI(10), PUSH1(3), DUP1(3), REVERT(0), JUMPDEST(1), POP(2)
which cost an average of about 21 gas per call to the function, in addition to the extra deployment cost.
There are 37 instances of this issue.
File: packages/protocol/contracts/automata-attestation/AutomataDcapV3Attestation.sol
65: function setMrSigner(bytes32 _mrSigner, bool _trusted) external onlyOwner {
69: function setMrEnclave(bytes32 _mrEnclave, bool _trusted) external onlyOwner {
73: function addRevokedCertSerialNum(
74: uint256 index,
75: bytes[] calldata serialNumBatch
76: )
77: external
78: onlyOwner
88: function removeRevokedCertSerialNum(
89: uint256 index,
90: bytes[] calldata serialNumBatch
91: )
92: external
93: onlyOwner
103: function configureTcbInfoJson(
104: string calldata fmspc,
105: TCBInfoStruct.TCBInfo calldata tcbInfoInput
106: )
107: public
108: onlyOwner
114: function configureQeIdentityJson(EnclaveIdStruct.EnclaveId calldata qeIdentityInput)
115: external
116: onlyOwner
122: function toggleLocalReportCheck() external onlyOwner {
[65, 69, 73-78, 88-93, 103-108, 114-116, 122]
File: packages/protocol/contracts/bridge/Bridge.sol
82: function suspendMessages(
83: bytes32[] calldata _msgHashes,
84: bool _suspend
85: )
86: external
87: onlyFromOwnerOrNamed("bridge_watchdog")
101: function banAddress(
102: address _addr,
103: bool _ban
104: )
105: external
106: onlyFromOwnerOrNamed("bridge_watchdog")
107: nonReentrant
File: packages/protocol/contracts/common/AddressManager.sol
38: function setAddress(
39: uint64 _chainId,
40: bytes32 _name,
41: address _newAddress
42: )
43: external
44: virtual
45: onlyOwner
[38-45]
File: packages/protocol/contracts/common/AddressResolver.sol
58: function __AddressResolver_init(address _addressManager) internal virtual onlyInitializing {
[58]
File: packages/protocol/contracts/common/EssentialContract.sol
95: function __Essential_init(
96: address _owner,
97: address _addressManager
98: )
99: internal
100: virtual
101: onlyInitializing
114: function _authorizeUpgrade(address) internal virtual override onlyOwner { }
116: function _authorizePause(address) internal virtual onlyOwner { }
File: packages/protocol/contracts/L1/TaikoToken.sol
47: function burn(address _from, uint256 _amount) public onlyOwner {
52: function snapshot() public onlyFromOwnerOrNamed("snapshooter") {
File: packages/protocol/contracts/L2/CrossChainOwned.sol
60: function __CrossChainOwned_init(
61: address _owner,
62: address _addressManager,
63: uint64 _ownerChainId
64: )
65: internal
66: virtual
67: onlyInitializing
[60-67]
File: packages/protocol/contracts/L2/TaikoL2.sol
163: function withdraw(
164: address _token,
165: address _to
166: )
167: external
168: onlyFromOwnerOrNamed("withdrawer")
169: nonReentrant
170: whenNotPaused
[163-170]
File: packages/protocol/contracts/L2/TaikoL2EIP1559Configurable.sol
25: function setConfigAndExcess(
26: Config memory _newConfig,
27: uint64 _newGasExcess
28: )
29: external
30: virtual
31: onlyOwner
[25-31]
File: packages/protocol/contracts/signal/SignalService.sol
56: function authorize(address _addr, bool _authorize) external onlyOwner {
[56]
File: packages/protocol/contracts/team/TimelockTokenPool.sol
135: function grant(address _recipient, Grant memory _grant) external onlyOwner {
150: function void(address _recipient) external onlyOwner {
File: packages/protocol/contracts/tokenvault/BridgedERC1155.sol
66: function mint(
67: address _to,
68: uint256 _tokenId,
69: uint256 _amount
70: )
71: public
72: nonReentrant
73: whenNotPaused
74: onlyFromNamed("erc1155_vault")
83: function mintBatch(
84: address _to,
85: uint256[] memory _tokenIds,
86: uint256[] memory _amounts
87: )
88: public
89: nonReentrant
90: whenNotPaused
91: onlyFromNamed("erc1155_vault")
100: function burn(
101: address _account,
102: uint256 _tokenId,
103: uint256 _amount
104: )
105: public
106: nonReentrant
107: whenNotPaused
108: onlyFromNamed("erc1155_vault")
File: packages/protocol/contracts/tokenvault/BridgedERC20.sol
80: function setSnapshoter(address _snapshooter) external onlyOwner {
85: function snapshot() external onlyOwnerOrSnapshooter {
File: packages/protocol/contracts/tokenvault/BridgedERC20Base.sol
36: function changeMigrationStatus(
37: address _migratingAddress,
38: bool _migratingInbound
39: )
40: external
41: nonReentrant
42: whenNotPaused
43: onlyFromOwnerOrNamed("erc20_vault")
[36-43]
File: packages/protocol/contracts/tokenvault/BridgedERC721.sol
54: function mint(
55: address _account,
56: uint256 _tokenId
57: )
58: public
59: nonReentrant
60: whenNotPaused
61: onlyFromNamed("erc721_vault")
69: function burn(
70: address _account,
71: uint256 _tokenId
72: )
73: public
74: nonReentrant
75: whenNotPaused
76: onlyFromNamed("erc721_vault")
File: packages/protocol/contracts/tokenvault/ERC20Vault.sol
148: function changeBridgedToken(
149: CanonicalERC20 calldata _ctoken,
150: address _btokenNew
151: )
152: external
153: nonReentrant
154: whenNotPaused
155: onlyOwner
[148-155]
File: packages/protocol/contracts/verifiers/SgxVerifier.sol
90: function addInstances(address[] calldata _instances)
91: external
92: onlyOwner
100: function deleteInstances(uint256[] calldata _ids)
101: external
102: onlyFromOwnerOrNamed("rollup_watchdog")
139: function verifyProof(
140: Context calldata _ctx,
141: TaikoData.Transition calldata _tran,
142: TaikoData.TierProof calldata _proof
143: )
144: external
145: onlyFromNamed("taiko")
File: packages/protocol/contracts/L1/provers/Guardians.sol
53: function setGuardians(
54: address[] memory _newGuardians,
55: uint8 _minGuardians
56: )
57: external
58: onlyOwner
59: nonReentrant
[53-59]
File: packages/protocol/contracts/team/airdrop/MerkleClaimable.sol
45: function setConfig(
46: uint64 _claimStart,
47: uint64 _claimEnd,
48: bytes32 _merkleRoot
49: )
50: external
51: onlyOwner
56: function __MerkleClaimable_init(
57: uint64 _claimStart,
58: uint64 _claimEnd,
59: bytes32 _merkleRoot
60: )
61: internal
62: onlyInitializing
[G-24] Caching global variables is more expensive than using the actual variable
It’s better not to cache global variables, as using them directly is cheaper (e.g. msg.sender).
There is 1 instance of this issue.
File: packages/protocol/contracts/L1/hooks/AssignmentHook.sol
93: address taikoL1Address = msg.sender;
[93]
[G-25] Add unchecked blocks for subtractions where the operands cannot underflow
Each of these subtractions is preceded by a check that rules out underflow, so it is safe to wrap them in unchecked blocks for some gas savings.
There are 7 instances of this issue.
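To illustrate why the preceding checks matter, the snippet below mimics Solidity's unchecked uint256 subtraction in Python (a hypothetical sketch with made-up values; inside an unchecked block, EVM uint256 arithmetic wraps modulo 2**256 instead of reverting):

```python
MOD = 2**256  # uint256 arithmetic wraps modulo 2**256 inside unchecked blocks

def unchecked_sub(a: int, b: int) -> int:
    """Mimic Solidity's unchecked subtraction on uint256 operands."""
    return (a - b) % MOD

# Guarded case: when a prior check guarantees a >= b (as on the lines flagged
# above), the unchecked result equals the checked result, and the compiler's
# underflow check can be skipped for a gas saving.
assert unchecked_sub(10, 3) == 7

# Unguarded case: without the check, the subtraction silently wraps to a
# huge value instead of reverting.
assert unchecked_sub(3, 10) == MOD - 7
```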
File: packages/protocol/contracts/L2/TaikoL2.sol
// @audit check on line 275
276: numL1Blocks = _l1BlockId - lastSyncedBlock;
[276]
File: packages/protocol/contracts/team/TimelockTokenPool.sol
// @audit check on line 257
264: return _amount * uint64(block.timestamp - _start) / _period;
[264]
File: packages/protocol/contracts/automata-attestation/lib/PEMCertChainLib.sol
// @audit check on line 262
265: uint256 lengthDiff = n - expectedLength;
[265]
File: packages/protocol/contracts/automata-attestation/utils/BytesUtils.sol
// @audit check on line 90
93: mask = ~(2 ** (8 * (32 - shortest + idx)) - 1);
[93]
File: packages/protocol/