Rewards and Penalties
Operators in a shared security environment are incentivized through rewards for making attestations and executing diverse specialized tasks. Rewards scale with an Operator's effective balance, which creates an economic incentive for Operators to behave honestly and act in the network's best interest.
To maximize rewards, a single Operator can opt in to validate multiple networks, putting its staked ETH at additional risk of penalties. Operators are penalized for missed, late, or incorrect attestations. Violating a consensus rule of chain A carries consequences for the Operator's effective balance, resulting in lower rewards and voting power not only on A but also on AVSs B and C.
Operator rewards depend on both the base reward per task and the frequency at which an operator is chosen to perform a task. AVS developers configure the task rewards and use a leader election mechanism to determine how often operators are selected for tasks. The Othentic Stack allows the configuration of diverse task types with different corresponding rewards.
Stake-weighted Rewards
Most Proof-of-Stake networks include a "stake-weighted" Leader Election mechanism where the more stake an operator has the more tasks they'll be chosen to perform. The Othentic Stack supports any Leader Election mechanism implemented by the AVS developer.
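As an illustration of the stake-weighted idea, the sketch below picks a leader with probability proportional to effective stake. This is a minimal, self-contained example, not the Othentic Stack's implementation; operator names and stake values are hypothetical.

```python
import random

def elect_leader(stakes: dict[str, int], rng: random.Random) -> str:
    """Pick an operator with probability proportional to its effective stake."""
    operators = list(stakes)
    weights = [stakes[op] for op in operators]
    return rng.choices(operators, weights=weights, k=1)[0]

# Hypothetical operators: op_a holds 60% of the total effective stake,
# so it should win roughly 60% of elections over many tasks.
stakes = {"op_a": 60, "op_b": 30, "op_c": 10}
rng = random.Random(42)
picks = [elect_leader(stakes, rng) for _ in range(10_000)]
print(picks.count("op_a") / len(picks))
```

Any other election rule (round-robin, uniformly random, committee-based) can be substituted here, since the Stack leaves the mechanism up to the AVS developer.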
When using a stake-weighted algorithm, for operator $i$ we denote the total reward $R_i$, with the expected value described as:

$$\mathbb{E}[R_i] = r \cdot n \cdot \frac{s_i}{S}$$

Where $r$ is the reward per task, $s_i$ is the effective balance of operator $i$, $S$ is the total effective stake for the network, and $n$ is the number of tasks the network produced overall.
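The expected value above is straightforward to compute directly. A small sketch with hypothetical numbers (a per-task reward of 0.5, 1,000 tasks, and an operator holding 30 out of 100 units of effective stake):

```python
def expected_reward(r: float, n: int, stake: float, total_stake: float) -> float:
    """Expected total reward for a stake-weighted election: r * n * (s_i / S)."""
    return r * n * stake / total_stake

# Hypothetical values: reward 0.5 per task, 1,000 tasks,
# operator holds 30 of 100 total effective stake.
print(expected_reward(0.5, 1_000, 30, 100))  # → 150.0
```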
There are two configurable components of the rewards function: the base reward per task and the Leader Election mechanism.
Task Definitions are used to configure base rewards for operators. Each task definition includes the base reward for all entities participating in consensus: the performer, the attesters, and the aggregator. You can configure task definitions by following the Creating Task Definition section on the Task page.
To generalize to any Leader Election algorithm, a probability function $p_i$ can be derived from the selected algorithm, allowing the expected value formula to be rewritten as:

$$\mathbb{E}[R_i] = r \cdot n \cdot p_i$$
For example, a uniformly random leader election mechanism (which is not stake-weighted) would have $p_i = \frac{1}{N}$, where $N$ is the number of operators.
$p_i$ — The probability of operator $i$ being elected for a task
$r$ — The base reward for each task
$p_i$ is derived from your Leader Election algorithm; different algorithms yield different probability functions. $r$ is the base reward and is configurable via the AttestationCenter contract.
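The generalized formula makes it easy to compare election mechanisms. The sketch below evaluates the same expected-value expression under a stake-weighted probability ($p_i = s_i / S$) and a uniform one ($p_i = 1/N$), using hypothetical stake values:

```python
def expected_reward_general(r: float, n: int, p_i: float) -> float:
    """Expected total reward E[R_i] = r * n * p_i for any election probability p_i."""
    return r * n * p_i

# Hypothetical operators and stakes.
stakes = {"op_a": 60.0, "op_b": 30.0, "op_c": 10.0}
total = sum(stakes.values())

# Stake-weighted election: p_i = s_i / S
p_stake = {op: s / total for op, s in stakes.items()}
# Uniform election: p_i = 1 / N
p_uniform = {op: 1 / len(stakes) for op in stakes}

r, n = 0.5, 1_000
print(expected_reward_general(r, n, p_stake["op_a"]))    # → 300.0
print(expected_reward_general(r, n, p_uniform["op_a"]))  # one third of tasks
```

Under stake weighting, op_a's 60% stake share earns 60% of the total reward pool, whereas under uniform election every operator expects an equal third regardless of stake.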