
SimpleSerialize (SSZ)


Constants

Name                     Value  Description
BYTES_PER_CHUNK          32     Number of bytes per chunk.
BYTES_PER_LENGTH_OFFSET  4      Number of bytes per serialized length offset.
BITS_PER_BYTE            8      Number of bits per byte.

Typing

Basic types

  • uintN: N-bit unsigned integer (where N in [8, 16, 32, 64, 128, 256])
  • byte: 8-bit opaque data container, equivalent in serialization and hashing to uint8
  • boolean: True or False

Composite types

  • container: ordered heterogeneous collection of values
    • python dataclass notation with key-type pairs, e.g.
      class ContainerExample(Container):
          foo: uint64
          bar: boolean
      
  • vector: ordered fixed-length homogeneous collection, with N values
    • notation Vector[type, N], e.g. Vector[uint64, N]
  • list: ordered variable-length homogeneous collection, limited to N values
    • notation List[type, N], e.g. List[uint64, N]
  • bitvector: ordered fixed-length collection of boolean values, with N bits
    • notation Bitvector[N]
  • bitlist: ordered variable-length collection of boolean values, limited to N bits
    • notation Bitlist[N]
  • union: union type containing one of the given subtypes
    • notation Union[type_0, type_1, ...], e.g. Union[None, uint64, uint32]

Note: Both Vector[boolean, N] and Bitvector[N] are valid, yet distinct due to their different serialization requirements. Similarly, both List[boolean, N] and Bitlist[N] are valid, yet distinct. Generally Bitvector[N]/Bitlist[N] are preferred because of their serialization efficiencies.

Variable-size and fixed-size

We recursively define "variable-size" types to be lists, unions, Bitlist and all types that contain a variable-size type. All other types are said to be "fixed-size".
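The recursive rule above can be expressed as a sketch with ad-hoc type descriptors (the tuple encoding used here is illustrative, not part of the spec):

```python
# Minimal sketch of the recursive fixed/variable-size rule. Types are
# modeled as tuples: ("uint", N), ("vector", elem_type), ("container",
# [field_types]), etc. These descriptors are illustrative only.
def is_variable_size(typ) -> bool:
    kind = typ[0]
    if kind in ("uint", "boolean", "bitvector"):
        return False                       # always fixed-size
    if kind in ("list", "bitlist", "union"):
        return True                        # always variable-size
    if kind == "vector":                   # variable iff the element type is
        return is_variable_size(typ[1])
    if kind == "container":                # variable iff any field type is
        return any(is_variable_size(f) for f in typ[1])
    raise ValueError(f"unknown type kind: {kind}")

uint64 = ("uint", 64)
assert not is_variable_size(("vector", uint64))
assert is_variable_size(("container", [uint64, ("list", uint64)]))
```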

Byte

Although the SSZ serialization of byte is equivalent to that of uint8, the former is used for opaque data while the latter is intended as a number.

Aliases

For convenience we alias:

  • bit to boolean
  • BytesN and ByteVector[N] to Vector[byte, N] (this is not a basic type)
  • ByteList[N] to List[byte, N]

Aliases are semantically equivalent to their underlying type and therefore share canonical representations both in SSZ and in related formats.

Default values

Assuming a helper function default(type) which returns the default value for type, we can recursively define the default value for all types.

Type                        Default Value
uintN                       0
boolean                     False
Container                   [default(type) for type in container]
Vector[type, N]             [default(type)] * N
Bitvector[N]                [False] * N
List[type, N]               []
Bitlist[N]                  []
Union[type_0, type_1, ...]  default(type_0)

is_zero

An SSZ object is called zeroed (and thus, is_zero(object) returns true) if it is equal to the default value for that type.

Illegal types

  • Empty vector types (Vector[type, 0], Bitvector[0]) are illegal.
  • Containers with no fields are illegal.
  • The None type option in a Union type is only legal as the first option (i.e. with index zero).

Serialization

We recursively define the serialize function which consumes an object value (of the type specified) and returns a bytestring of type bytes.

Note: In the function definitions below (serialize, hash_tree_root, is_variable_size, etc.) objects implicitly carry their type.

uintN

assert N in [8, 16, 32, 64, 128, 256]
return value.to_bytes(N // BITS_PER_BYTE, "little")
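For instance, serializing the uint16 value 1025 (using Python's built-in int.to_bytes):

```python
# uint16 serialization: 1025 = 0x0401, emitted little-endian.
value, N = 1025, 16
encoded = value.to_bytes(N // 8, "little")
assert encoded == b"\x01\x04"
# Deserialization is the inverse.
assert int.from_bytes(encoded, "little") == 1025
```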

boolean

assert value in (True, False)
return b"\x01" if value is True else b"\x00"

Bitvector[N]

array = [0] * ((N + 7) // 8)
for i in range(N):
    array[i // 8] |= value[i] << (i % 8)
return bytes(array)
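A worked example of the loop above for a hypothetical Bitvector[5]:

```python
# Bitvector[5] with bits [1, 0, 1, 1, 0]: bit i lands in byte i // 8 at
# position i % 8, so the single byte is 0b01101 = 0x0d.
value, N = [1, 0, 1, 1, 0], 5
array = [0] * ((N + 7) // 8)
for i in range(N):
    array[i // 8] |= value[i] << (i % 8)
assert bytes(array) == b"\x0d"
```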

Bitlist[N]

Note that from the offset coding, the length (in bytes) of the bitlist is known. An additional 1 bit is added to the end, at index e where e is the length of the bitlist (not the limit), so that the length in bits will also be known.

array = [0] * ((len(value) // 8) + 1)
for i in range(len(value)):
    array[i // 8] |= value[i] << (i % 8)
array[len(value) // 8] |= 1 << (len(value) % 8)
return bytes(array)
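A worked example of the delimiter bit, wrapping the logic above in a small helper:

```python
# The delimiter bit makes the length in bits recoverable: an empty Bitlist
# serializes to b"\x01", and [1, 1] serializes to 0b111 = b"\x07".
def serialize_bitlist(value):
    array = [0] * ((len(value) // 8) + 1)
    for i in range(len(value)):
        array[i // 8] |= value[i] << (i % 8)
    array[len(value) // 8] |= 1 << (len(value) % 8)  # delimiter bit
    return bytes(array)

assert serialize_bitlist([]) == b"\x01"
assert serialize_bitlist([1, 1]) == b"\x07"
assert serialize_bitlist([0] * 8) == b"\x00\x01"  # delimiter spills into a new byte
```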

Vectors, containers, lists

# Recursively serialize
fixed_parts = [serialize(element) if not is_variable_size(element) else None for element in value]
variable_parts = [serialize(element) if is_variable_size(element) else b"" for element in value]

# Compute and check lengths
fixed_lengths = [len(part) if part != None else BYTES_PER_LENGTH_OFFSET for part in fixed_parts]
variable_lengths = [len(part) for part in variable_parts]
assert sum(fixed_lengths + variable_lengths) < 2**(BYTES_PER_LENGTH_OFFSET * BITS_PER_BYTE)

# Interleave offsets of variable-size parts with fixed-size parts
variable_offsets = [serialize(uint32(sum(fixed_lengths + variable_lengths[:i]))) for i in range(len(value))]
fixed_parts = [part if part != None else variable_offsets[i] for i, part in enumerate(fixed_parts)]

# Return the concatenation of the fixed-size parts (offsets interleaved) with the variable-size parts
return b"".join(fixed_parts + variable_parts)
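A worked example of the offset layout, for a hypothetical container with fields (a: uint16, b: List[uint8, ...], c: uint16, d: List[uint8, ...]):

```python
# The fixed section holds a, a 4-byte offset for b, c, and a 4-byte offset
# for d (2 + 4 + 2 + 4 = 12 bytes); the variable parts follow in order.
a = (0x1234).to_bytes(2, "little")
c = (0x5678).to_bytes(2, "little")
b = bytes([1, 2, 3])                               # variable part of b
d = bytes([4])                                     # variable part of d
offset_b = (12).to_bytes(4, "little")              # b starts right after the fixed section
offset_d = (12 + len(b)).to_bytes(4, "little")     # d starts after b (offset 15)
encoded = a + offset_b + c + offset_d + b + d
assert encoded == bytes.fromhex("3412" "0c000000" "7856" "0f000000" "010203" "04")
```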

Union

A value of Union[T...] type has properties value.value with the contained value, and value.selector which indexes the selected Union type option T.

A Union:

  • May have multiple selectors with the same type.
  • Should not use selectors above 127 (i.e. highest bit set); these are reserved for backwards-compatible extensions.
  • Must have at least 1 type option.
  • May have None as the first type option, i.e. selector == 0.
  • Must have at least 2 type options if the first is None.
  • Is always considered a variable-length type, even if all type options have an equal fixed length.

if value.value is None:
    assert value.selector == 0
    return b"\x00"
else:
    serialized_bytes = serialize(value.value)
    serialized_selector_index = value.selector.to_bytes(1, "little")
    return serialized_selector_index + serialized_bytes
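For example, for Union[None, uint64, uint32] with selector 2 and value 7:

```python
# Union serialization: one selector byte, then the serialized value.
selector = 2                                  # the uint32 option
value_bytes = (7).to_bytes(4, "little")       # serialized uint32
encoded = selector.to_bytes(1, "little") + value_bytes
assert encoded == b"\x02\x07\x00\x00\x00"
# The None option (selector 0) serializes to a single zero byte.
assert (0).to_bytes(1, "little") == b"\x00"
```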

Deserialization

Because serialization is an injective function (i.e. two distinct objects of the same type will serialize to different values) any bytestring has at most one object it could deserialize to.

Deserialization can be implemented using a recursive algorithm. The deserialization of basic objects is easy, and from there we can find a simple recursive algorithm for all fixed-size objects. For variable-size objects we have to do one of the following depending on what kind of object it is:

  • Vector/list of a variable-size object: The serialized data will start with offsets of all the serialized objects (BYTES_PER_LENGTH_OFFSET bytes each).
    • Using the first offset, we can compute the length of the list (divide by BYTES_PER_LENGTH_OFFSET), as it gives us the total number of bytes in the offset data.
    • The size of each object in the vector/list can be inferred from the difference of two offsets. To get the size of the last object, the total number of bytes has to be known (it is not generally possible to deserialize an SSZ object of unknown length).
  • Containers follow the same principles as vectors, with the difference that there may be fixed-size objects in a container as well. This means the fixed_parts data will contain offsets as well as fixed-size objects.
  • In the case of bitlists, the length in bits cannot be uniquely inferred from the number of bytes in the object. Because of this, they have a bit at the end that is always set. This bit has to be used to infer the size of the bitlist in bits.
  • In the case of unions, the first byte of the deserialization scope is deserialized as type selector, the remainder of the scope is deserialized as the selected type.

Note that deserialization requires hardening against invalid inputs. A non-exhaustive list:

  • Offsets: out of order, out of range, mismatching minimum element size.
  • Scope: Extra unused bytes, not aligned with element size.
  • More elements than a list limit allows. Part of enforcing consensus.
  • An out-of-bounds selector in a Union.

Efficient algorithms for computing this object can be found in the implementations.
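The offset-based algorithm above can be sketched for a list of variable-size elements (a minimal sketch; only the listed checks are shown):

```python
# Sketch: deserialize a list of variable-size elements from the offset
# layout. The element payloads are returned as raw byte slices.
def deserialize_variable_list(data: bytes) -> list:
    if len(data) == 0:
        return []                                  # empty list
    first_offset = int.from_bytes(data[:4], "little")
    # The fixed section is all offsets, so it must be offset-aligned and in range.
    assert first_offset % 4 == 0 and 0 < first_offset <= len(data)
    count = first_offset // 4
    offsets = [int.from_bytes(data[i * 4:(i + 1) * 4], "little") for i in range(count)]
    offsets.append(len(data))                      # the scope end bounds the last element
    assert all(offsets[i] <= offsets[i + 1] for i in range(count))  # in order, in range
    return [data[offsets[i]:offsets[i + 1]] for i in range(count)]

# Two elements, b"\xaa" and b"\xbb\xcc": offsets 8 and 9, then the payloads.
data = (8).to_bytes(4, "little") + (9).to_bytes(4, "little") + b"\xaa\xbb\xcc"
assert deserialize_variable_list(data) == [b"\xaa", b"\xbb\xcc"]
```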

Merkleization

We first define helper functions:

Merkleization helper functions

def get_power_of_two_ceil(x: int) -> int:
    """
    Get the power of 2 for given input, or the closest higher power of 2 if the input is not a power of 2.
    """
    if x <= 1:
        return 1
    elif x == 2:
        return 2
    else:
        return 2 * get_power_of_two_ceil((x + 1) // 2)

def merkleize(chunks: List[Bytes32], limit: Optional[int] = None) -> Bytes32:
    """
    Merkleize a list of chunks into a single root.
    - chunks: list of Bytes32 chunks
    - limit (optional): pad (with zero chunks) to the tree size needed for
      `limit` chunks, to simulate a list with that maximum length
    """
    count = len(chunks)
    if limit is None:
        limit = count
    assert count <= limit  # a list may not exceed its limit

    # Pad the leaf layer with zero chunks up to the power-of-two width
    width = get_power_of_two_ceil(limit)
    layer = list(chunks) + [Bytes32([0] * 32)] * (width - count)

    # Hash pairs of nodes until a single root remains
    while len(layer) > 1:
        layer = [hash(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]
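As a worked example (assuming SHA-256 as the pair hash, as in Ethereum consensus SSZ), three chunks are padded to four leaves with a zero chunk and hashed pairwise:

```python
import hashlib

# Assumption: the pair hash is SHA-256 over the 64-byte concatenation.
def h(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

chunks = [bytes([1]) * 32, bytes([2]) * 32, bytes([3]) * 32]
zero = bytes(32)
leaves = chunks + [zero]           # pad to the power-of-two width (4)
root = h(h(leaves[0], leaves[1]), h(leaves[2], leaves[3]))
assert len(root) == 32             # a single 32-byte root remains
```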

def pack(values: Sequence[T]) -> List[Bytes32]:
    """
    Serialize the values, right-pad the result with zero bytes to a multiple
    of BYTES_PER_CHUNK, and partition it into 32-byte chunks.
    """
    data = b"".join(serialize(value) for value in values)
    if len(data) % BYTES_PER_CHUNK != 0:
        data += b"\x00" * (BYTES_PER_CHUNK - len(data) % BYTES_PER_CHUNK)
    return [Bytes32(data[i:i + BYTES_PER_CHUNK]) for i in range(0, len(data), BYTES_PER_CHUNK)]

Chunking

def pack_boolean(value: boolean) -> Bytes32:
    return Bytes32(int(value).to_bytes(32, "little"))

def pack_uintN(value: uintN) -> Bytes32:
    return Bytes32(value.to_bytes(32, "little"))

def pack_bits(bits: Sequence[boolean]) -> List[Bytes32]:
    """
    Pack bits into bytes (bit i into byte i // 8 at position i % 8), WITHOUT
    the serialization delimiter bit, right-pad with zero bytes to a multiple
    of 32, and partition into 32-byte chunks.
    """
    byte_len = (len(bits) + 7) // 8
    array = bytearray(byte_len)
    for i in range(len(bits)):
        array[i // 8] |= bits[i] << (i % 8)
    padded_len = ((byte_len + 31) // 32) * 32
    array += bytearray(padded_len - byte_len)
    return [Bytes32(array[i:i + 32]) for i in range(0, padded_len, 32)]

hash_tree_root

def mix_in_length(root: Bytes32, length: int) -> Bytes32:
    return hash(root + length.to_bytes(32, "little"))

def mix_in_selector(root: Bytes32, selector: int) -> Bytes32:
    return hash(root + selector.to_bytes(32, "little"))

def hash_tree_root(value) -> Bytes32:
    """
    Calculate the hash tree root of an SSZ object.
    """
    if isinstance(value, (uint8, uint16, uint32, uint64, uint128, uint256)):
        return pack_uintN(value)  # a single chunk is its own root
    elif isinstance(value, boolean):
        return pack_boolean(value)
    elif isinstance(value, Bitvector):
        return merkleize(pack_bits(value))
    elif isinstance(value, Bitlist):
        # Limit in chunks: 256 bits per 32-byte chunk
        root = merkleize(pack_bits(value), limit=(value.limit + 255) // 256)
        return mix_in_length(root, len(value))
    elif isinstance(value, List):
        chunks = [hash_tree_root(item) for item in value]
        return mix_in_length(merkleize(chunks, limit=value.limit), len(value))
    elif isinstance(value, Vector):
        return merkleize([hash_tree_root(item) for item in value])
    elif isinstance(value, Container):
        return merkleize([hash_tree_root(field) for field in value.__dict__.values()])
    elif isinstance(value, Union):
        if value.value is None:
            value_root = Bytes32([0] * 32)
        else:
            value_root = hash_tree_root(value.value)
        return mix_in_selector(value_root, value.selector)
    else:
        raise Exception(f"Cannot hash tree root object of type {type(value)}")
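For example, the hash tree root of a basic uintN is just its single packed chunk, with no hashing involved:

```python
# hash_tree_root of a uint32: the value packed into one 32-byte
# little-endian chunk.
value = 0xDEADBEEF
root = value.to_bytes(32, "little")
assert root[:4] == b"\xef\xbe\xad\xde"   # little-endian value bytes
assert root[4:] == bytes(28)             # zero padding to 32 bytes
```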

merkle_proof and verify_merkle_proof

def compute_merkle_proof(chunks: List[Bytes32], index: int) -> List[Bytes32]:
    """
    Compute the Merkle proof for a chunk at the given index.
    """
    tree_size = get_power_of_two_ceil(len(chunks))
    depth = tree_size.bit_length() - 1  # log2 of the power-of-two tree size

    # Pad the leaf layer with zero chunks up to tree_size
    chunks = chunks + [Bytes32([0] * 32)] * (tree_size - len(chunks))

    proof = []
    for _ in range(depth):
        sibling_index = index ^ 1  # the sibling differs only in the lowest bit
        proof.append(chunks[sibling_index])

        # Move up to the parent level
        index //= 2

        # Hash pairs to produce the next layer
        chunks = [hash(chunks[j] + chunks[j + 1]) for j in range(0, len(chunks), 2)]

    return proof

def verify_merkle_proof(root: Bytes32, leaf: Bytes32, index: int, proof: List[Bytes32]) -> bool:
    """
    Verify a Merkle proof for a leaf at the given index.
    """
    calculated_root = compute_merkle_root_from_proof(leaf, index, proof)
    return calculated_root == root

def compute_merkle_root_from_proof(leaf: Bytes32, index: int, proof: List[Bytes32]) -> Bytes32:
    """
    Compute the Merkle root from a leaf and its proof.
    """
    root = leaf
    for i, sibling in enumerate(proof):
        if (index // (2**i)) % 2 == 0:
            root = hash(root + sibling)
        else:
            root = hash(sibling + root)
    return root
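An end-to-end check of the proof logic above (assuming SHA-256 as the pair hash):

```python
import hashlib

# Assumption: the pair hash is SHA-256 over the 64-byte concatenation.
def h(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

# Build a 4-leaf tree and take the proof for leaf index 2.
leaves = [bytes([i]) * 32 for i in range(4)]
root = h(h(leaves[0], leaves[1]), h(leaves[2], leaves[3]))
index = 2
proof = [leaves[3], h(leaves[0], leaves[1])]  # sibling leaf, then sibling subtree

# Recompute the root exactly as compute_merkle_root_from_proof does.
calc = leaves[index]
for i, sibling in enumerate(proof):
    if (index >> i) % 2 == 0:
        calc = h(calc, sibling)   # current node is a left child
    else:
        calc = h(sibling, calc)   # current node is a right child
assert calc == root
```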

Weber-optimized Merkleization

Weber introduces several optimizations to the standard Merkleization process:

def compressed_merkle_proof(chunks: List[Bytes32], indices: List[int]) -> List[Bytes32]:
    """
    Compute a compressed (multi-item) Merkle proof for the chunks at the given
    leaf indices. Nodes are addressed by generalized index: the root is 1, the
    children of node g are 2*g and 2*g+1, so leaf i sits at tree_size + i.
    """
    tree_size = get_power_of_two_ceil(len(chunks))
    padded = chunks + [Bytes32([0] * 32)] * (tree_size - len(chunks))

    # Build the full tree bottom-up
    nodes = [None] * tree_size + padded
    for g in range(tree_size - 1, 0, -1):
        nodes[g] = hash(nodes[2 * g] + nodes[2 * g + 1])

    # The requested leaves and all of their ancestors are derivable by the verifier
    derivable = set()
    for index in indices:
        g = tree_size + index
        while g >= 1:
            derivable.add(g)
            g //= 2

    # The minimal proof is every sibling of a derivable node that is not itself derivable
    needed = sorted(g ^ 1 for g in derivable if g > 1 and (g ^ 1) not in derivable)
    return [nodes[g] for g in needed]

def verify_compressed_merkle_proof(
    root: Bytes32,
    leaves: Dict[int, Bytes32],  # Map of generalized indices to leaf values
    proof: List[Bytes32],
    proof_indices: List[int]  # Generalized indices of the proof elements, sorted
) -> bool:
    """
    Verify a compressed Merkle proof for multiple leaves. All indices are
    generalized indices: leaf i of a tree with tree_size leaves is tree_size + i.
    """
    # Reconstruct the minimal tree required for verification
    tree: Dict[int, Bytes32] = dict(zip(sorted(proof_indices), proof))
    tree.update(leaves)

    # Repeatedly hash known sibling pairs until no new parents can be derived
    changed = True
    while changed and 1 not in tree:
        changed = False
        for g in sorted(tree.keys(), reverse=True):
            if g > 1 and (g ^ 1) in tree and g // 2 not in tree:
                left, right = (g, g ^ 1) if g % 2 == 0 else (g ^ 1, g)
                tree[g // 2] = hash(tree[left] + tree[right])
                changed = True

    # The root sits at generalized index 1
    return tree.get(1) == root

Summaries and expansions

SSZ defines functions for producing "summaries" (compact representations sufficient for verifying Merkle proofs against) and "expansions" (full reconstructions of objects from minimal information).

Summary functions

def summary(value):
    """
    Generate a minimal summary of an SSZ object that can be used for proof verification.
    """
    if isinstance(value, (uint8, uint16, uint32, uint64, uint128, uint256, boolean)):
        return value  # Basic types are their own summaries
    elif isinstance(value, (Bitvector, Bitlist)):
        return hash_tree_root(value)  # For bit collections, just use the root
    elif isinstance(value, (List, Vector)):
        return {
            'length': len(value),
            'root': hash_tree_root(value)
        }
    elif isinstance(value, Container):
        return {
            'fields': {name: hash_tree_root(field) for name, field in value.__dict__.items()},
            'root': hash_tree_root(value)
        }
    elif isinstance(value, Union):
        return {
            'selector': value.selector,
            'value': summary(value.value),
            'root': hash_tree_root(value)
        }
    else:
        raise Exception(f"Cannot summarize object of type {type(value)}")

Expansion functions

def expand(summary_value, ssz_type):
    """
    Attempt to expand a summary into a full SSZ object of the given type.
    """
    # Implementation depends on the available data in the summary
    # This can only produce a partial reconstruction in most cases
    # ...

Implementations

Reference implementations of SSZ exist in multiple languages.

Web-based tools:

  • SSZ Playground

JSON mapping

For debugging and interoperability with other systems like JSON-RPC, SSZ defines a convention for mapping SSZ objects to and from JSON.

Convention

The mapping follows these rules:

  1. boolean values map to the JSON boolean type
  2. uintN values map to JSON strings with decimal representation (to support large integers)
  3. Bytes32, Vector[byte, N] map to JSON strings with 0x-prefixed hexadecimal encoding
  4. List and Vector values map to JSON arrays
  5. Container values map to JSON objects with fields as properties
  6. Union values map to JSON objects with a selected type key and a value key
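The rules above can be sketched for a flat object (the to_json_value helper and its kind tags are hypothetical, for illustration only):

```python
import json

# Hypothetical helper applying the mapping rules per field kind:
# uints -> decimal strings, booleans -> JSON booleans, byte vectors ->
# 0x-prefixed hex, lists of uints -> arrays of decimal strings.
def to_json_value(value, kind):
    if kind == "uint":
        return str(value)                 # rule 2
    if kind == "boolean":
        return bool(value)                # rule 1
    if kind == "bytes":
        return "0x" + value.hex()         # rule 3
    if kind == "list":
        return [str(v) for v in value]    # rules 2 and 4
    raise ValueError(kind)

obj = {
    "a": to_json_value(123456789, "uint"),
    "b": to_json_value(True, "boolean"),
    "root": to_json_value(bytes.fromhex("ff00"), "bytes"),
}
assert json.loads(json.dumps(obj)) == {"a": "123456789", "b": True, "root": "0xff00"}
```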

Example

class ExampleContainer(Container):
    a: uint64
    b: boolean
    c: Vector[uint8, 3]
    d: List[uint16, 5]

x = ExampleContainer(
    a=123456789,
    b=True,
    c=[1, 2, 3],
    d=[4, 5]
)

JSON representation:

{
  "a": "123456789",
  "b": true,
  "c": ["1", "2", "3"],
  "d": ["4", "5"]
}

Well-formedness validation

When parsing JSON into SSZ objects, implementations should:

  1. Validate types match the expected SSZ schema
  2. Validate that numeric values are within range for their types
  3. Validate list lengths against maximum sizes
  4. Apply strict parsing for hexadecimal strings (proper format and length)
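The checks above can be sketched as follows (parse_uint and parse_hex are hypothetical helpers, not part of the spec):

```python
# Hypothetical JSON-side validation helpers for SSZ parsing.
def parse_uint(s: str, bits: int) -> int:
    assert isinstance(s, str) and s.isdigit()   # rule 2: decimal string
    value = int(s)
    assert 0 <= value < 2 ** bits               # range check for uintN
    return value

def parse_hex(s: str, length: int) -> bytes:
    # Strict format: 0x prefix and exactly 2 hex digits per byte
    assert s.startswith("0x") and len(s) == 2 + 2 * length
    return bytes.fromhex(s[2:])

assert parse_uint("65535", 16) == 65535
assert parse_hex("0xdeadbeef", 4) == b"\xde\xad\xbe\xef"
```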