Node Operators

An AVS network is composed of many nodes working simultaneously to validate tasks published to the network.

Overview

There are four types of nodes in the network:

  • Performers

  • Attesters

  • Aggregators

  • Bootstrap Nodes

The Othentic Stack enables the deployment of an entire network, including bootstrapped peer discovery, which allows nodes to find each other.


Performers

The Task Performer is an AVS Operator responsible for executing tasks through the Execution Service, generating a Proof of Task, and sending the results to Attesters. Upon successful task execution, the Task Performer broadcasts an event via peer-to-peer networking, enabling Attester nodes to discover the task results.

The proofOfTask is submitted to the peer-to-peer network using the sendTask RPC call.
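
As a rough illustration only, a Performer's sendTask submission might look like the following JSON-RPC call, assuming the node's RPC server is reachable on port 8545 (as in the health-check example further down) and that the parameters include the proofOfTask, task data, task definition ID, performer address, and performer signature; the exact parameter list and ordering may differ in your CLI version, so treat this as a sketch rather than the definitive API:

# Hypothetical sendTask call; placeholders in <...> must be filled in
curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
        "jsonrpc": "2.0",
        "method": "sendTask",
        "params": ["<proofOfTask>", "<data>", <taskDefinitionId>, "<performerAddress>", "<signature>"],
        "id": 1
      }'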


Attesters

Task Attesters form a quorum of AVS Operators that attests to the validity of executed tasks. Each task must be attested as either "valid" or "invalid".

The Operator's voting power is proportional to the amount of re-staked assets deposited on the shared security layer, referred to as "dynamic voting power." The re-staked effective balance determines each Operator's influence in the consensus process. If over ⅔ of the quorum's voting power attests "valid", the task is considered approved. If over ⅓ of the quorum's voting power attests "invalid", the task is rejected and the quorum executes a slashing event against the Performer. The Attesters run the validation logic using a local HTTP request to the Validation Service.
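
For illustration, an Attester's local call to its Validation Service could look like the sketch below, assuming the AVS WebAPI exposes a validation endpoint such as /task/validate (the path and payload shape are assumptions; check your Validation Service implementation). <HOST> and <PORT> correspond to the --avs-webapi and --avs-webapi-port values used when running the Attester (see below):

# Hypothetical local validation request
curl -X POST http://<HOST>:<PORT>/task/validate \
  -H "Content-Type: application/json" \
  -d '{ "proofOfTask": "<proofOfTask>" }'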

Running the Attester Nodes

To run the Attester node with the necessary configurations, use the following command:

othentic-cli node attester \
    /ip4/127.0.0.1/tcp/9876/p2p/<BOOTSTRAP_NODE_ID> \
    --avs-webapi <HOST> \
    --avs-webapi-port <PORT>

BOOTSTRAP_NODE_ID is saved in the .env file as per the Bootstrap node configuration.
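
For example, a minimal sketch of wiring the value from .env into the attester command; the IP and port reuse the example values above, and your bootstrap node's address will differ:

# Load OTHENTIC_BOOTSTRAP_ID from .env (assumes simple KEY=VALUE lines)
source .env
othentic-cli node attester \
    /ip4/127.0.0.1/tcp/9876/p2p/${OTHENTIC_BOOTSTRAP_ID} \
    --avs-webapi <HOST> \
    --avs-webapi-port <PORT>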

When running multiple Aggregator nodes, an Attester can connect to all of them by passing their multiaddresses as a comma-separated list:

command: [
    "node",
    "attester",
    "/ip4/{Aggregator_IP1}/tcp/9876/p2p/${OTHENTIC_BOOTSTRAP_ID},/ip4/{Aggregator_IP2}/tcp/9876/p2p/${OTHENTIC_BOOTSTRAP_ID},/ip4/{Aggregator_IP3}/tcp/9876/p2p/${OTHENTIC_BOOTSTRAP_ID}",
    "--avs-webapi", "<HOST>",
    "--l1-chain", "holesky",
    "--l2-chain", "amoy,base-sepolia"
]
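
For context, a minimal docker-compose service wrapping the command above could look like the following; the service name, image name, and env_file wiring are illustrative assumptions rather than the official setup:

services:
  attester:
    image: othentic/othentic-cli:latest   # hypothetical image name, for illustration only
    env_file: .env                        # passes OTHENTIC_BOOTSTRAP_ID into the container environment
    command: [
      "node",
      "attester",
      "/ip4/{Aggregator_IP1}/tcp/9876/p2p/${OTHENTIC_BOOTSTRAP_ID},/ip4/{Aggregator_IP2}/tcp/9876/p2p/${OTHENTIC_BOOTSTRAP_ID}",
      "--avs-webapi", "<HOST>",
      "--l1-chain", "holesky",
      "--l2-chain", "amoy,base-sepolia"
    ]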

Features

  • Operator Status Throttling: A throttling mechanism on operator status checks reduces idle RPC usage. Configure it using the optional --status-check-interval <interval> flag; the default is 5000 milliseconds.

    othentic-cli node attester --status-check-interval 8000
  • Announced Addresses: The announce option in libp2p allows nodes to explicitly define which addresses they advertise to peers, overriding any automatically detected addresses. This is particularly useful when running behind a load balancer or NAT, where the external address needs to be set manually. To enable this option in the CLI, start the node with the --announced-addresses <multi-address> flag, specifying the desired multiaddrs. This ensures that peers connect using the correct external address rather than any internally detected ones.

    othentic-cli node attester --announced-addresses /dnsaddr/{dns-name}/tcp/{port1}/p2p/{Peer ID},/ip4/{resolved dns ip}/tcp/{port1}/p2p/{Peer ID}

    To retrieve your Peer Id, run the following command (a combined sketch follows this list):

    othentic-cli node get-id --node-type attester
  • Metrics: The --metrics option allows for the configuration of metrics to monitor the Attester node's performance.

  • Persistent Storage: The --p2p.datadir flag specifies the directory for storing peerStore data, ensuring persistence across restarts.

  • Custom p2p messaging: Customize peer-to-peer messaging behavior using specific configuration options for your network, including the sendCustomMessage RPC method and othentic.p2p.custom_message topic.
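
A rough sketch of combining these options is shown below. It assumes that get-id prints only the peer ID (if it prints additional text, copy the ID manually), and the DNS name is a placeholder:

# Capture the attester's peer ID and reuse it in the announced address
PEER_ID=$(othentic-cli node get-id --node-type attester)
othentic-cli node attester \
    /ip4/127.0.0.1/tcp/9876/p2p/<BOOTSTRAP_NODE_ID> \
    --avs-webapi <HOST> \
    --avs-webapi-port <PORT> \
    --announced-addresses /dnsaddr/attester.example.com/tcp/9876/p2p/${PEER_ID}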

Aggregators

The Aggregator listens to events from the Attester nodes and monitors whether a task has gathered the required voting power. It aggregates the Attesters' signatures into a single BLS aggregated signature and submits a transaction to the AttestationCenter smart contract. After successful validation, the Performer, Attesters, and Aggregator are eligible to claim task rewards.

Running the Aggregator

To run the Aggregator node with the necessary configurations, use the following command:

othentic-cli node aggregator \
  --json-rpc \
  --l1-chain holesky \
  --l2-chain amoy,base-sepolia \
  --internal-tasks \
  --metrics \
  --delay 1500
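
The Aggregator also needs chain access and a signing key, typically supplied via environment variables. The variable names below are illustrative assumptions only; check your project's .env.example for the exact names used by your setup:

# Hypothetical variable names, for illustration only
L1_RPC=<holesky RPC URL>
L2_RPC=<amoy RPC URL>
PRIVATE_KEY_AGGREGATOR=<aggregator signing key>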

Features

  • Transaction Simulation Flag: The --aggregator.simulate-transactions flag allows transactions to be simulated before task submission.

    othentic-cli node aggregator --aggregator.simulate-transactions true
  • Delay Submission: The --delay <time> flag specifies an additional waiting period after 2/3 of the voting power has been reached, allowing more attestations to arrive before the transaction is submitted. The aggregator waits for the specified delay before submitting the transaction on-chain. <time> is specified as a number in milliseconds.

When operating multiple Aggregator nodes, transaction simulation and delayed submission help mitigate race conditions. If two aggregators submit the same task, the first transaction accepted on-chain will succeed, while the second will fail.

  • Internal Tasks: By enabling the --internal-tasks flag, internal tasks are executed on the Aggregator nodes.

  • Metrics: The --metrics option allows for the configuration of metrics to monitor the Aggregator node's performance.

  • Persistent Storage: The --p2p.datadir flag specifies the directory for storing peerStore data, ensuring persistence across restarts.

  • Custom p2p messaging: Customize peer-to-peer messaging behavior using specific configuration options for your network, including the sendCustomMessage RPC method and othentic.p2p.custom_message topic.

  • Logging: With multiple logging destinations (console, file, or Elasticsearch) and severity levels, Othentic CLI offers detailed logging for easier debugging, real-time monitoring, and historical analysis.

  • Health Check: When running the node with the --json-rpc flag, the RPC server starts and provides a /healthcheck endpoint to verify its status. For example:

    curl -X GET http://localhost:8545/healthcheck

    This endpoint should return OK if the node is functioning properly, indicating the RPC service is active. For more detailed monitoring, use the Metrics or Logging features.

Currently, the Aggregator node also acts as your bootstrap node. This might change in the future.


Bootstrap nodes

A bootstrap node serves as the initial point of contact for peers in the network. When a new peer joins, it first connects to a bootstrap node to discover other peers. Multiple bootstrap nodes can be run, and the Aggregator node also functions as a bootstrap node.

Bootstrap nodes (or bootnodes) are essential for new participants who wish to discover peers on the network. While anyone can operate a bootstrap node, at least one highly available bootstrap node is needed so that new nodes and operators can join the network.

We recommend using the Aggregator node as a bootstrap node for optimal performance (see Node Operators).

The bootstrap node is a crucial part of your P2P network: if it is down, new nodes cannot join the network (existing nodes continue to work), so it should be run with high availability.

Running Bootstrap Nodes

To configure a bootstrap node, copy the .env.example file to .env and populate it with the necessary values.

The .env.example file includes a pre-configured bootstrap seed and ID. While these can be used to get started, it is recommended to generate a new bootstrap seed and ID (see steps below).

Retrieve BOOTSTRAP_NODE_ID

To retrieve your BOOTSTRAP_NODE_ID, run the following command:

othentic-cli node get-id

Generating Bootstrap Seed and Id

To initialize a bootstrap node, you need to generate a bootstrap seed for its encrypted transport. The bootstrap seed is a random 32-byte sequence.

Step 1: Generate Bootstrap Seed

To generate a new bootstrap seed, run the following command using openssl:

openssl rand -hex 32

Step 2: Insert Seed into .env File

If you want to automatically add the generated seed to your .env file, use the following command:

echo "OTHENTIC_BOOTSTRAP_SEED=$(openssl rand -hex 32)" >> .env

Step 3: Generate Node ID

After adding the seed to your .env file, generate the bootstrap node ID using the following command:

yarn bootstrap-id

# You should see output similar to this:
Your node ID is:
12D3KooWBNFG1QjuF3UKAKvqhdXcxh9iBmj88cM5eU2EK5Pa91KB
✨  Done in 3.72s.

Step 4: Save Node ID

Copy the generated node ID and add it to your .env file:

OTHENTIC_BOOTSTRAP_ID=12D3KooWBNFG1QjuF3UKAKvqhdXcxh9iBmj88cM5eU2EK5Pa91KB
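
Putting the pieces together, your .env should now contain both values, and the bootstrap multiaddress that Attesters dial is composed from the node's IP, P2P port, and this ID (the IP and port below reuse the example values from earlier on this page):

# .env
OTHENTIC_BOOTSTRAP_SEED=<output of `openssl rand -hex 32`>
OTHENTIC_BOOTSTRAP_ID=12D3KooWBNFG1QjuF3UKAKvqhdXcxh9iBmj88cM5eU2EK5Pa91KB

# Bootstrap multiaddress passed to `othentic-cli node attester`:
# /ip4/127.0.0.1/tcp/9876/p2p/${OTHENTIC_BOOTSTRAP_ID}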
