Weber Light Client Sync Protocol

Notice: This document is a work-in-progress for researchers and implementers of the Weber protocol.

Introduction

The Weber Light Client Sync Protocol enables resource-constrained devices to maintain consensus with the Ethereum network without processing or storing the full blockchain. This document describes the enhanced light client protocol, which builds upon the existing Ethereum light client specification while adding Weber-specific optimizations for improved efficiency, security, and performance in constrained environments.

Light clients in Weber synchronize by tracking sync committees and verifying cryptographic proofs rather than processing full blocks. The protocol addresses specific challenges faced in resource-limited environments such as mobile devices, IoT systems, and browser-based applications.

Constants

| Name | Value | Description |
| --- | --- | --- |
| MAX_LIGHT_CLIENT_UPDATES | 128 | Maximum number of light client updates to request in a single query |
| MIN_SYNC_INTERVAL | 6 | Minimum interval (in seconds) between light client sync operations |
| MAX_VALID_LIGHT_CLIENT_UPDATES | 32 | Maximum number of non-finalized light client updates a node will store |
| SAFETY_THRESHOLD_PERCENTAGE | 66 | Percentage of sync committee signatures required for accepting an update |
| LIGHT_CLIENT_CHAIN_DEPTH | 8192 | Maximum epochs of history maintained by light clients |
| WEBER_PROOF_COMPRESSION_LEVEL | 9 | Compression level for Weber-optimized proofs (0-9) |
| SIGNATURE_BATCH_SIZE | 16 | Number of signatures to batch verify at once |
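
SAFETY_THRESHOLD_PERCENTAGE is applied against the participation bits of a sync aggregate. The following illustrative sketch (not part of the specification) shows one way the check could be expressed, assuming sync_committee_bits is a sequence of booleans with one entry per sync committee member:

def meets_safety_threshold(sync_committee_bits) -> bool:
    """Return True if enough sync committee members signed the update."""
    total = len(sync_committee_bits)
    participating = sum(1 for bit in sync_committee_bits if bit)
    # Integer comparison avoids floating-point rounding at the boundary
    return participating * 100 >= total * SAFETY_THRESHOLD_PERCENTAGE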

Types

class LightClientState(Container):
    # Current sync committee
    current_sync_committee: SyncCommittee
    # Next sync committee
    next_sync_committee: SyncCommittee
    # Header for the most recent finalized checkpoint
    finalized_header: BeaconBlockHeader
    # Current best header observed (optimistic)
    optimistic_header: BeaconBlockHeader
    # Sync committee signing period of the last update
    current_sync_committee_period: uint64
    # Timestamp of most recent update
    last_update_timestamp: uint64
    # Reputation mapping for sync data providers
    provider_reputation: Dict[PeerId, uint32]
    # Resource usage metrics
    resource_metrics: ResourceMetrics

class LightClientUpdate(Container):
    # Header attested to by the sync committee
    attested_header: BeaconBlockHeader
    # Next sync committee if this is a sync committee period boundary
    next_sync_committee: SyncCommittee
    # Next sync committee branch
    next_sync_committee_branch: Vector[Bytes32, log2(NEXT_SYNC_COMMITTEE_INDEX)]
    # Finalized header at the time of update creation
    finalized_header: BeaconBlockHeader
    # Finality branch
    finality_branch: Vector[Bytes32, log2(FINALIZED_ROOT_INDEX)]
    # Sync aggregate for attested header
    sync_aggregate: SyncAggregate
    # Signature slot
    signature_slot: Slot
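
LightClientState references a ResourceMetrics container whose layout is not defined in this section. A minimal illustrative sketch following the same container conventions (all field names are assumptions, not normative):

class ResourceMetrics(Container):
    # Total bytes downloaded for sync data since bootstrap
    total_bytes_downloaded: uint64
    # Bytes of headers, committees, and proofs currently stored
    total_bytes_stored: uint64
    # Cumulative time (in milliseconds) spent verifying signatures and proofs
    verification_time_ms: uint64
    # Number of light client updates applied since bootstrap
    updates_applied: uint64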

Protocol Flow

The Weber light client sync protocol defines the following primary operations:

  1. Bootstrap: Initial connection and state acquisition
  2. Update synchronization: Regular updates of the chain state
  3. Optimistic sync: Faster, partial updates for time-sensitive applications
  4. Checkpoint sync: Rapid sync to recent finalized checkpoints
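
A minimal sketch of how a client could compose these operations, using the bootstrap and update routines defined below together with MIN_SYNC_INTERVAL (optimistic and checkpoint sync are omitted; this is illustrative, not normative):

import asyncio

async def run_light_client(trusted_block_root: Root, peers: Sequence[Peer]) -> None:
    # 1. Bootstrap: acquire an initial state from a trusted block root
    state = await bootstrap_light_client(trusted_block_root, peers)

    # 2. Update synchronization: pull and apply updates on a fixed cadence,
    #    never more frequently than MIN_SYNC_INTERVAL
    while True:
        state = await sync_light_client(state, peers)
        await asyncio.sleep(MIN_SYNC_INTERVAL)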

Bootstrap Process

async def bootstrap_light_client(
    trusted_block_root: Root,
    peers: Sequence[Peer]
) -> LightClientState:
    """
    Bootstrap a light client from a trusted block root.
    """
    # Request bootstrap data from multiple peers
    bootstrap_responses = await gather_from_peers(
        peers,
        "light_client_bootstrap",
        trusted_block_root
    )

    # Validate responses
    for peer, bootstrap in bootstrap_responses:
        if verify_light_client_bootstrap(bootstrap, trusted_block_root):
            # Initialize state with bootstrap data
            state = initialize_light_client_state(bootstrap)
            return state

    raise BootstrapError("Failed to bootstrap from trusted block root")
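
The routine above relies on verify_light_client_bootstrap. A simplified sketch of such a check, following the conventions of the existing Ethereum light client specification (the bootstrap field names and the helpers hash_tree_root, is_valid_merkle_branch, and CURRENT_SYNC_COMMITTEE_INDEX are assumed from that specification, not defined here):

def verify_light_client_bootstrap(bootstrap, trusted_block_root: Root) -> bool:
    # The bootstrap header must be exactly the block the client chose to trust
    if hash_tree_root(bootstrap.header) != trusted_block_root:
        return False

    # The advertised current sync committee must be proven against the
    # header's state root via its Merkle branch
    return is_valid_merkle_branch(
        leaf=hash_tree_root(bootstrap.current_sync_committee),
        branch=bootstrap.current_sync_committee_branch,
        depth=len(bootstrap.current_sync_committee_branch),
        index=CURRENT_SYNC_COMMITTEE_INDEX,
        root=bootstrap.header.state_root,
    )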

Update Synchronization

async def sync_light_client(
    state: LightClientState,
    peers: Sequence[Peer]
) -> LightClientState:
    """
    Synchronize a light client state to the latest finalized checkpoint.
    """
    # Calculate current period from state
    current_period = compute_sync_committee_period(state.finalized_header.slot)

    # Request updates starting from the current period
    start_period = current_period
    # LIGHT_CLIENT_CHAIN_DEPTH is denominated in epochs, so divide by the
    # number of epochs per sync committee period to obtain a period count
    count = LIGHT_CLIENT_CHAIN_DEPTH // EPOCHS_PER_SYNC_COMMITTEE_PERIOD

    # Connect to multiple peers to obtain updates
    peer_responses = await gather_from_peers(
        peers,
        "light_client_updates_by_period",
        start_period,
        min(count, MAX_LIGHT_CLIENT_UPDATES)
    )

    # Validate and merge results based on provider reputation
    validated_updates = []
    for peer, updates in peer_responses:
        if is_trusted_peer(peer) or await validate_updates(updates):
            validated_updates.extend(updates)

    # Apply the validated updates to advance the light client state
    for update in validated_updates[:count]:
        process_light_client_update(state, update)

    return state
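
The routine above depends on compute_sync_committee_period. A sketch of the computation, assuming the standard SLOTS_PER_EPOCH and EPOCHS_PER_SYNC_COMMITTEE_PERIOD constants from the Ethereum specification:

def compute_sync_committee_period(slot: Slot) -> uint64:
    # A sync committee serves for EPOCHS_PER_SYNC_COMMITTEE_PERIOD epochs,
    # so the period is the epoch number divided by that constant
    epoch = slot // SLOTS_PER_EPOCH
    return epoch // EPOCHS_PER_SYNC_COMMITTEE_PERIOD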

Compressed Sync Data

Weber's sync data compression mechanism:

def compress_sync_data(data) -> bytes:
    """
    Compress sync data to reduce bandwidth usage.
    Accepts a list of block headers, a Merkle proof, or raw bytes.
    """
    # Use differential encoding to reduce size of consecutive block headers
    if isinstance(data, list) and all(isinstance(x, BeaconBlockHeader) for x in data):
        return differential_encode_headers(data)

    # Use dictionary compression for repetitive hash values
    elif is_merkle_proof(data):
        return compress_merkle_proof(data)

    # Default to standard compression
    else:
        return standard_compression(data, level=WEBER_PROOF_COMPRESSION_LEVEL)
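
The differential encoding path relies on differential_encode_headers, which is not defined here. A sketch of the idea; the encoded layout and the serialize_full_header helper are illustrative assumptions:

def differential_encode_headers(headers: Sequence[BeaconBlockHeader]) -> list:
    # Emit the first header in full, then only the fields that change.
    # For headers that form a parent/child chain, parent_root can be
    # omitted because it is recomputable from the previous header.
    encoded = [serialize_full_header(headers[0])]
    for prev, curr in zip(headers, headers[1:]):
        encoded.append({
            "slot_delta": curr.slot - prev.slot,
            "proposer_index": curr.proposer_index,
            "state_root": curr.state_root,
            "body_root": curr.body_root,
        })
    return encoded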

Security Considerations

Attack Mitigations

The Weber sync protocol includes the following attack protections:

def is_likely_attack(updates: Sequence[LightClientUpdate]) -> bool:
    """
    Detect potential sync attacks
    """
    # Check for inconsistent committee signatures
    signatures = [update.sync_aggregate.sync_committee_signature for update in updates]
    if has_conflicting_signatures(signatures):
        return True

    # Detect unusual fork patterns
    if detect_unusual_fork_pattern(updates):
        return True

    # Verify header consistency
    headers = [update.attested_header for update in updates]
    if not verify_headers_consistency(headers):
        return True

    return False
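
The helpers used above are not specified in this section. As one example, a sketch of what verify_headers_consistency could check (the exact semantics are an assumption): a batch of updates should never present two different headers for the same slot.

def verify_headers_consistency(headers: Sequence[BeaconBlockHeader]) -> bool:
    seen: Dict[Slot, Root] = {}
    for header in headers:
        root = hash_tree_root(header)
        if header.slot in seen and seen[header.slot] != root:
            # Two conflicting headers claimed for the same slot
            return False
        seen[header.slot] = root
    return True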

Sync in Weak Network Conditions

Strategy for synchronizing in low-bandwidth environments:

async def low_bandwidth_sync(
    light_client_state: LightClientState,
    max_bandwidth_kbps: float
) -> LightClientState:
    """
    Sync strategy for low-bandwidth environments
    """
    # Estimate minimum required sync data
    min_data_size = estimate_min_sync_data_size(light_client_state)

    # Calculate the maximum sync interval the bandwidth budget allows;
    # this interval governs how often this routine should be re-run
    sync_interval = max(
        MIN_SYNC_INTERVAL,
        min_data_size / (max_bandwidth_kbps * 1024 / 8)
    )

    # Prioritize critical updates only
    critical_updates = await request_critical_updates_only(light_client_state)

    # Apply critical updates
    for update in critical_updates:
        process_light_client_update(light_client_state, update)

    return light_client_state
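
As a worked example with illustrative numbers, an IoT-class device limited to 10 kbps with an estimated minimum sync payload of 8 KiB would compute:

# 10 kbps expressed in bytes per second: 10 * 1024 / 8 = 1280 B/s
min_data_size = 8 * 1024          # 8 KiB of critical sync data (assumed)
max_bandwidth_kbps = 10.0
sync_interval = max(MIN_SYNC_INTERVAL, min_data_size / (max_bandwidth_kbps * 1024 / 8))
# -> max(6, 8192 / 1280) = 6.4 seconds between sync operations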

Performance Optimizations

Adaptive Sync Strategy

Weber's adaptive sync strategy:

class AdaptiveSyncStrategy:
    """
    Adaptively adjust sync strategy based on network conditions and device capabilities
    """
    def __init__(self):
        self.bandwidth_history = []
        self.cpu_utilization_history = []
        self.memory_usage_history = []

    def update_metrics(self, bandwidth: float, cpu: float, memory: float):
        """Update resource metrics history"""
        self.bandwidth_history.append(bandwidth)
        self.cpu_utilization_history.append(cpu)
        self.memory_usage_history.append(memory)

        # Keep history at reasonable size
        if len(self.bandwidth_history) > 100:
            self.bandwidth_history.pop(0)
        if len(self.cpu_utilization_history) > 100:
            self.cpu_utilization_history.pop(0)
        if len(self.memory_usage_history) > 100:
            self.memory_usage_history.pop(0)

    def get_optimal_sync_parameters(self) -> SyncParameters:
        """Calculate optimal sync parameters"""
        # Fall back to normal mode until any metrics have been recorded
        if not self.bandwidth_history:
            return SyncParameters(
                update_frequency=FAST_UPDATE_FREQUENCY,
                compression_level=LOW_COMPRESSION_LEVEL,
                max_updates_per_request=STANDARD_UPDATES_PER_REQUEST
            )

        avg_bandwidth = sum(self.bandwidth_history) / len(self.bandwidth_history)
        avg_cpu = sum(self.cpu_utilization_history) / len(self.cpu_utilization_history)
        avg_memory = sum(self.memory_usage_history) / len(self.memory_usage_history)

        # Adjust parameters based on available resources
        if avg_bandwidth < LOW_BANDWIDTH_THRESHOLD:
            # Low bandwidth mode
            return SyncParameters(
                update_frequency=SLOW_UPDATE_FREQUENCY,
                compression_level=HIGH_COMPRESSION_LEVEL,
                max_updates_per_request=MINIMAL_UPDATES_PER_REQUEST
            )
        elif avg_cpu > HIGH_CPU_THRESHOLD or avg_memory > HIGH_MEMORY_THRESHOLD:
            # Resource-constrained mode
            return SyncParameters(
                update_frequency=MEDIUM_UPDATE_FREQUENCY,
                compression_level=MEDIUM_COMPRESSION_LEVEL,
                max_updates_per_request=REDUCED_UPDATES_PER_REQUEST
            )
        else:
            # Normal mode
            return SyncParameters(
                update_frequency=FAST_UPDATE_FREQUENCY,
                compression_level=LOW_COMPRESSION_LEVEL,
                max_updates_per_request=STANDARD_UPDATES_PER_REQUEST
            )
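
The strategy above references a SyncParameters structure and several threshold and preset constants that this section does not define. A minimal sketch of what they might look like (all names and values are illustrative assumptions, not normative):

from dataclasses import dataclass

@dataclass
class SyncParameters:
    update_frequency: float        # seconds between sync operations
    compression_level: int         # level passed to compress_sync_data (0-9)
    max_updates_per_request: int   # capped by MAX_LIGHT_CLIENT_UPDATES

# Illustrative placeholder values for the thresholds used above
LOW_BANDWIDTH_THRESHOLD = 50.0     # kbps
HIGH_CPU_THRESHOLD = 80.0          # percent utilization
HIGH_MEMORY_THRESHOLD = 80.0       # percent of the device's memory budget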

Resource Limitations

Weber defines sync resource limits for different device types:

def get_device_sync_limits(device_type: str) -> ResourceLimits:
    """
    Get sync resource limits based on device type
    """
    if device_type == "mobile":
        return ResourceLimits(
            max_memory_mb=200,
            max_storage_mb=500,
            max_bandwidth_kbps=50,
            max_cpu_percentage=10
        )
    elif device_type == "iot":
        return ResourceLimits(
            max_memory_mb=50,
            max_storage_mb=100,
            max_bandwidth_kbps=10,
            max_cpu_percentage=5
        )
    elif device_type == "desktop":
        return ResourceLimits(
            max_memory_mb=1000,
            max_storage_mb=5000,
            max_bandwidth_kbps=500,
            max_cpu_percentage=25
        )
    else:  # server
        return ResourceLimits(
            max_memory_mb=4000,
            max_storage_mb=20000,
            max_bandwidth_kbps=1000,
            max_cpu_percentage=50
        )
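
The ResourceLimits structure used above is not defined in this section. A minimal sketch mirroring the keyword arguments passed by get_device_sync_limits (the dataclass form is an assumption), followed by an example call:

from dataclasses import dataclass

@dataclass
class ResourceLimits:
    max_memory_mb: int
    max_storage_mb: int
    max_bandwidth_kbps: int
    max_cpu_percentage: int

# Example: an IoT deployment caps sync bandwidth at 10 kbps
limits = get_device_sync_limits("iot")
assert limits.max_bandwidth_kbps == 10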

Implementation Guidelines

When implementing the Weber sync protocol, consider:

  1. Progressive implementation: Start with basic sync functionality, then add Weber enhancements
  2. Compatibility checks: Ensure compatibility with existing Ethereum light clients
  3. Recovery mechanisms: Implement robust sync recovery processes
  4. Resource monitoring: Add resource usage monitoring and adaptive adjustment
  5. Validation prioritization: Prioritize validation of updates from providers with high reputation scores (see the sketch after this list)
  6. Test scenarios: Test sync behavior under various network conditions
  7. Security auditing: Review sync mechanisms for potential security risks
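
A minimal sketch of how the provider_reputation mapping from LightClientState could be maintained and used to prioritize providers (the scoring values and the peer id attribute are illustrative assumptions):

def update_provider_reputation(state: LightClientState, peer_id: PeerId, update_was_valid: bool) -> None:
    # Reward providers slowly for valid data, penalize sharply for invalid data
    score = state.provider_reputation.get(peer_id, 100)
    if update_was_valid:
        score = min(score + 1, 1000)
    else:
        score = max(score - 25, 0)
    state.provider_reputation[peer_id] = score

def order_peers_by_reputation(state: LightClientState, peers: Sequence[Peer]) -> Sequence[Peer]:
    # Query higher-reputation providers first when requesting updates
    return sorted(peers, key=lambda peer: state.provider_reputation.get(peer.id, 100), reverse=True)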

A proper implementation of the Weber sync protocol can significantly improve light client efficiency and reliability while reducing resource requirements, making blockchain technology more accessible in resource-constrained environments.