Introduction

Welcome to the Polka Storage project book. This document is a work in progress and will be constantly updated.

This project aims to build a native storage network for Polkadot.

We've now completed Phase 2 and have started work on Phase 3.

During Phase 2, we have implemented:

We also present a complete real-world scenario in which a Storage Provider and a Storage User negotiate a deal, perform all the steps necessary to start the storage and then receive rewards (or punishments) for making it happen.

The Polka Storage project currently provides:

  • Dedicated CLIs
  • Pallets
  • Polka Storage Client Upload

During Phase 1, we implemented the following:

We present a demo on how to store a file, where a Storage Provider and a Storage User negotiate a deal and perform all the steps necessary to start the file storage. We cover the details behind proving a file in a separate demo.

More information about the project's genesis is available in:


Eiger Oy

Architecture Overview

The Polka Storage parachain is, just like other parachains, composed of collators that receive extrinsic calls and, through them, perform state transitions.

System Overview

From left to right, we have validators (represented by a single node as only one validates blocks at a time), collators and the storage providers.

The validators are handled by Polkadot itself — i.e. who gets to check the validity proofs is randomly selected by the network.

The collators execute our parachain runtime and process extrinsic calls from the storage providers — such as proof of storage submissions.

The storage providers are independent of the collators and are run by people like you, who provide storage to the system. Storage management is left to the storage providers, who are responsible for keeping their physical systems in good shape to serve clients. We do provide an implementation of the storage provider; you can read more about it in the Polka Storage Provider Server chapter.

Pallets Overview

We've been focusing on implementing the core functionality by developing the market, storage provider, proofs and randomness pallets.

The market pallet handles all things related to deal payments and slashing; it is informed by the storage provider pallet when deals haven't been proven and applies slashing in those cases. The storage provider pallet handles the registration of storage providers and proof submission; the latter is checked inside the collator's WASM runtime using the proofs pallet. Finally, the proofs pallet makes use of the randomness pallet for challenges, ensuring the storage providers can't cheat the system.

For a deeper dive on the pallets, you can read the Pallets chapter.

Resources on Parachains

Reading:

Videos:

Polka Storage Provider — Server Architecture

The server has two main fronts: the JSON-RPC API, which provides an interface for users to submit deal proposals to the storage provider, and the HTTP API, which consists of a single endpoint where users submit their data — as illustrated below.

The user is first required to propose a deal; once the proposal is accepted by the storage provider (signaled by the return of a CID), the user can submit a file to the server (using curl, for example); finally, the user publishes a signed deal to the storage provider. For more details, see the File Upload Demo.

The responsibility then falls on the storage provider to seal, publish and activate the deal on the Polka Storage parachain.

JSON-RPC API

The JSON-RPC endpoint exposes the following methods:

info — returns information about the Storage Provider.

JSON-RPC Example

{
  "jsonrpc": "2.0",
  "id": 0,
  "method": "v0_info",
  "params": []
}
propose_deal — accepts a deal proposal and returns a CID for the file upload.

JSON-RPC Example

{
  "jsonrpc": "2.0",
  "id": 0,
  "method": "v0_propose_deal",
  "params": [
    {
      "piece_cid": "baga6ea4seaqj527iqfb2kqhy3tmpydzroiigyaie6g3txai2kc3ooyl7kgpeipi",
      "piece_size": 2048,
      "client": "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY",
      "provider": "5FLSigC9HGRKVhB9FiEo4Y3koPsNmBmLJbpXg2mp1hXcS59Y",
      "label": "",
      "start_block": 200,
      "end_block": 250,
      "storage_price_per_block": 500,
      "provider_collateral": 1250,
      "state": "Published"
    }
  ]
}
publish_deal — after a file has been uploaded, accepts a signed deal for publishing.
{
  "jsonrpc": "2.0",
  "id": 0,
  "method": "v0_publish_deal",
  "params": [
    {
      "deal_proposal": {
        "piece_cid": "baga6ea4seaqj527iqfb2kqhy3tmpydzroiigyaie6g3txai2kc3ooyl7kgpeipi",
        "piece_size": 2048,
        "client": "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY",
        "provider": "5FLSigC9HGRKVhB9FiEo4Y3koPsNmBmLJbpXg2mp1hXcS59Y",
        "label": "",
        "start_block": 100000,
        "end_block": 100050,
        "storage_price_per_block": 500,
        "provider_collateral": 1250,
        "state": "Published"
      },
      "client_signature": {
        "Sr25519": "c835a1c5215fc017067d30a8f49df0c643233881e57d8bd7232f695e1d28c748e8872b45712dcb403e28792cd1fb2b6161053b3344d4f6664bafec77349abd80"
      }
    }
  ]
}

HTTP API

The HTTP API exposes a single PUT method — /upload/<cid> where <cid> is the CID returned as a result of propose_deal.
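
As a sketch, assuming the storage provider's HTTP endpoint is reachable at http://localhost:8001 (the actual address and port depend on your server configuration), that my-file.car is the file agreed upon in the deal, and that propose_deal returned the CID from the example above, the upload could look like:

curl -X PUT -T "my-file.car" "http://localhost:8001/upload/baga6ea4seaqj527iqfb2kqhy3tmpydzroiigyaie6g3txai2kc3ooyl7kgpeipi"

See the File Upload Demo for the end-to-end flow.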

Sealing Pipeline

As shown in the previous illustration, the sealing pipeline is responsible for gathering pieces into sectors, sealing said sectors and proving their storage. To achieve that, the pipeline is (currently) composed of 3 main stages.

The sealing pipeline, currently composed of the stages: Add Piece, Pre Commit and Prove Commit

Add Piece

The Add Piece stage gathers pieces into unsealed sectors, preparing them for the next steps.

Given that we currently only support 2KiB sectors, we convert single pieces into sectors: when a piece comes in, we convert it to a single sector, without gathering multiple pieces.

Pre Commit

By itself, the Pre Commit has two inner stages — Pre Commit 1 (PC1) and Pre Commit 2 (PC2). PC1 is responsible for generating the Stacked Depth Robust Graph, while PC2 is responsible for handling the construction of the Merkle tree and proofs. After this process is completed, the Pre Commit information is submitted to the chain for verification.

Prove Commit

The Prove Commit stage is where the Proof of Replication is generated. After generation, the proof is submitted to the network for validation and the sector is finally marked as Active, signaling that the Storage Provider has effectively stored the sector and is ready to start performing regular proof submissions.

Polka Storage pallets

  • storage-provider - A pallet that manages storage providers and their associated data.
  • market - A pallet that handles the storage market operations.
  • proofs - A pallet responsible for verifying PoRep and PoSt.
  • randomness - A pallet providing a randomness source for blocks, mainly used by the proofs pallet.

Overview

The Polka Storage parachain is all about making storage deals. Let us go over how a deal is done!

Before anything else, Storage Providers need to register themselves with the Storage Provider Pallet — they can do so using the register_storage_provider extrinsic.

Storage Provider registration
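
For instance, using storagext-cli (the same command is described in detail in the Storage Provider Pallet chapter):

storagext-cli --sr25519-key "//Alice" storage-provider register alice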

Now that storage providers can be registered in the storage provider pallet, we need to add some balance to both the Storage User's and the Provider's accounts, which is done using the Market's add_balance extrinsic.

Adding balance to Market accounts
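
For instance, using storagext-cli to add balance to the provider's (//Alice) and the client's (//Charlie) accounts (these keys simply mirror the examples used throughout this book):

storagext-cli --sr25519-key "//Alice" market add-balance 1000000000
storagext-cli --sr25519-key "//Charlie" market add-balance 1000000000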

Afterwards, storage users and providers negotiate data storage deals off-chain. Once a deal between the two parties is reached, the client can sign the deal and send it to the storage provider for publishing — the storage provider will then publish the signed deal using the publish_storage_deals extrinsic.

After publishing, the funds allocated for the deal will be moved from free to locked, and they can no longer be withdrawn until the deal has ended.

Publishing storage deals
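
For instance, using storagext-cli with //Alice as the storage provider and //Charlie as the client (the deals.json format is described in the Market Pallet chapter):

storagext-cli --sr25519-key "//Alice" market publish-storage-deals \
  --client-sr25519-key "//Charlie" \
  "@deals.json"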

At this point, the remaining responsibility shifts to the storage provider, which needs to activate the deal. First, the storage provider calls get_randomness from the Randomness Pallet in order to create a replica and pre-commit the deal's sectors. Sealing and pre-committing take some time; after that, the storage provider fetches yet another randomness seed to create a proof. Subsequently, they prove they stored the sectors by calling the prove_commit_sectors extrinsic.

Verification is done via the Proofs Pallet and reported to the Market Pallet. If the storage provider fails to activate the deal on time, the deal is terminated, penalties are applied to the storage provider (its collateral, i.e. locked funds, is removed and burned), and the funds are returned to the client.

Deal activation
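
For instance, using storagext-cli to pre-commit and then prove-commit a sector (the JSON file formats are described in the Storage Provider Pallet chapter):

storagext-cli --sr25519-key "//Alice" storage-provider pre-commit @pre-commit-sector.json
storagext-cli --sr25519-key "//Alice" storage-provider prove-commit @prove-commit-sector.json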

Once the deal has been successfully activated (i.e. is Active), the storage provider is required to periodically submit proofs that they're still storing the user's data; they do this by calculating a proof and submitting it using submit_windowed_post.

Proving the data is still stored
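
For instance, using storagext-cli (the submit-windowed-post.json format is described in the Storage Provider Pallet chapter):

storagext-cli --sr25519-key "//Alice" storage-provider submit-windowed-post @submit-windowed-post.json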

Finally, storage providers can then settle deal payments to receive their fair share for keeping the user's data safe — using the settle_deal_payments extrinsic.

Settling deal payments
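
For instance, using storagext-cli to settle the payment for deal ID 0 (an illustrative ID; use the IDs of your published deals):

storagext-cli --sr25519-key "//Alice" market settle-deal-payments 0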

Putting it all together, we get the following:

The described flow

Market Pallet


Overview

The purpose of the pallet is to manage storage deals between storage market participants and to track their funds. The Market Pallet is tightly coupled with the Storage Provider Pallet because it is the source of truth for deals; the Storage Provider Pallet cannot operate without deal information from the Market Pallet.

Extrinsics

add_balance

Reserves a given amount of currency for usage in the Storage Market.

The reserved amount will be considered free until it is used in a deal, at which point it is moved to locked and used to pay for the deal.

Name | Description | Type
amount | The amount to be reserved | Positive integer

Example

Using the storagext-cli to add 1_000_000_000 Plancks1 to Alice's account with the following command2:

storagext-cli --sr25519-key "//Alice" market add-balance 1000000000

1: This value is the minimum amount due to Polkadot's existential deposit. More information is available in: <https://support.polkadot.network/support/solutions/articles/65000168651-what-is-the-existential-deposit->.

2: Read more about the add-balance command in Storagext CLI/Subcommand market/add-balance.

withdraw_balance

Withdraws funds from the Storage Market.

The funds will be withdrawn from the free balance, meaning that the amount must be less than or equal to free and greater than 0 (\({free} \ge {amount} \gt 0\)).

Name | Description | Type
amount | The amount to be withdrawn | Positive integer

Example

Using the storagext-cli to withdraw 10000 Plancks from Alice's free balance using the following command3:

storagext-cli --sr25519-key "//Alice" market withdraw-balance 10000
3: Read more about the withdraw-balance command in Storagext CLI/Subcommand market/withdraw-balance.

publish_storage_deals

Publishes a list of deals to the chain.

This extrinsic must be called by a storage provider.

Name | Description | Type
proposal | Specific deal proposal, a JSON object | JSON object, specified in the deal proposal components section
client_signature | Client signature of this specific deal proposal | MultiSignature, meaning a 64-byte array for Sr25519 and Ed25519 signatures and a 65-byte array for ECDSA signatures

The client_signature, as the name indicates, is generated by the client by signing the deal proposal with their private key — the storagext-cli does this for the user automatically4. This signature ensures that the storage provider cannot forge a deal with an arbitrary client. The type of signature is dependent on the key the signer has, currently supported key types are Sr25519, ECDSA and Ed25519.

This step corresponds to the "sign & send proposal" step in the deal overview.

Deal Proposal Components

Name | Description | Type
piece_cid | Byte encoded CID | CID
piece_size | Size of the piece | Positive integer
client | SS58 address of the storage client | SS58 address
provider | SS58 address of the storage provider | SS58 address
label | Arbitrary client-chosen label | String, with a maximum length of 128 characters
start_block | Block number on which the deal should start | Positive integer
end_block | Block number on which the deal should end | Positive integer, end_block > start_block
storage_price_per_block | Price for the storage specified per block5 | Positive integer, in Plancks
provider_collateral | Collateral which is slashed if the deal fails | Positive integer, in Plancks
state | Deal state. Can only be set to Published | String

See the original Filecoin specification for details.

Example

Using the storagext-cli to publish deals with //Alice as the storage provider and //Charlie as the client by running the following command6:

storagext-cli --sr25519-key "//Alice" market publish-storage-deals \
  --client-sr25519-key "//Charlie" \
  "@deals.json"

Where deals.json is a file with contents similar to:

[
  {
    "piece_cid": "bafk2bzacecg3xxc4f2ql2hreiuy767u6r72ekdz54k7luieknboaakhft5rgk",
    "piece_size": 1337,
    "client": "5FLSigC9HGRKVhB9FiEo4Y3koPsNmBmLJbpXg2mp1hXcS59Y",
    "provider": "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY",
    "label": "Super Cool (but secret) Plans for a new Polkadot Storage Solution",
    "start_block": 69,
    "end_block": 420,
    "storage_price_per_block": 15,
    "provider_collateral": 2000,
    "state": "Published"
  },
  {
    "piece_cid": "bafybeih5zgcgqor3dv6kfdtv3lshv3yfkfewtx73lhedgihlmvpcmywmua",
    "piece_size": 1143,
    "client": "5FLSigC9HGRKVhB9FiEo4Y3koPsNmBmLJbpXg2mp1hXcS59Y",
    "provider": "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY",
    "label": "List of problematic (but flying) Boeing planes",
    "start_block": 1010,
    "end_block": 1997,
    "storage_price_per_block": 1,
    "provider_collateral": 3900,
    "state": "Published"
  }
]
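
As a quick sanity check of the pricing formula (see footnote 5 below), the first deal above runs from block 69 to block 420 at 15 Plancks per block, giving a total storage price of \((420 - 69) \times 15 = 5265\) Plancks; the provider_collateral of 2000 Plancks is locked separately and only slashed if the deal fails.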
Notice how the CLI command doesn't take the client_signature parameter but rather a keypair that can sign it.

We are aware that this is not secure. However, the system is still under development and is not final; it is a testing tool.

4: Take into account that the CLI is currently for demo purposes; the authors are aware that the command isn't safe since it requires the private keys of both parties.

5: The formula to calculate the total price is as follows: \[total\_price = (end\_block - start\_block) \times storage\_price\_per\_block\]

6: Read more about the publish-storage-deals command in Storagext CLI/Subcommand market/publish-storage-deals.

settle_deal_payments

Settle specified deals between providers and clients.

Both clients and providers can call this extrinsic. However, since the settlement is the mechanism through which the provider gets paid, a client has no reason to call this extrinsic. Non-existing deal IDs will be ignored.

Name | Description | Type
deal_ids | List of the deal IDs to be settled | Array of integers

Example

Using the storagext-cli to settle deal payments for IDs 97, 1010, 1337 and 42069 using the following command7:

storagext-cli --sr25519-key "//Alice" market settle-deal-payments 97 1010 1337 42069
7: Read more about the settle-deal-payments command in Storagext CLI/Subcommand market/settle-deal-payments.

Events

The Market Pallet emits the following events:

  • BalanceAdded - Indicates that some balance was added as free to the Market Pallet account for usage in the storage market.
    • who - SS58 address of the account which added balance
    • amount - Amount added
  • BalanceWithdrawn - Some balance was transferred (free) from the Market Account to the Participant's account.
    • who - SS58 address of the account that withdrew the balance
    • amount - Amount withdrawn
  • DealPublished - Indicates that a deal was successfully published with publish_storage_deals.
    • deal_id - Unique deal ID
    • client - SS58 address of the storage client
    • provider - SS58 address of the storage provider
  • DealActivated - Deal's state has changed to Active.
    • deal_id - Unique deal ID
    • client - SS58 address of the storage client
    • provider - SS58 address of the storage provider
  • DealsSettled - Published after the settle_deal_payments extrinsic is called. Indicates which deals were successfully and unsuccessfully settled.
    • successful - List of deal IDs that were settled
    • unsuccessful - List of deal IDs with the corresponding errors
  • DealSlashed - Emitted when a deal has expired.
    • deal_id - Deal ID that was slashed
  • DealTerminated - A deal was voluntarily or involuntarily terminated.
    • deal_id - Terminated deal ID
    • client - SS58 address of the storage client
    • provider - SS58 address of the storage provider

Errors

The Market Pallet actions can fail with the following errors:

  • InsufficientFreeFunds - Market participants do not have enough free funds.
  • NoProposalsToBePublished - publish_storage_deals was called with an empty list of deals.
  • ProposalsPublishedByIncorrectStorageProvider - Returned when publish_storage_deals is called with deals that are not all published by the same storage provider.
  • AllProposalsInvalid - publish_storage_deals call was supplied with a list of deals which are all invalid.
  • DuplicateDeal - There is more than one deal with this ID in the Sector.
  • DealNotFound - Tried to activate a deal that is not in the system.
  • DealActivationError - Tried to activate a deal, but data was malformed.
    • Invalid specified provider.
    • The deal already expired.
    • Sector containing the deal expires before the deal.
    • Invalid deal state.
    • Deal is not pending.
  • DealsTooLargeToFitIntoSector - Sum of all deals piece sizes for a sector exceeds sector size. The sector size is based on the registered proof type. We currently only support registered StackedDRG2KiBV1P1 proofs, which have 2KiB sector sizes.
  • TooManyDealsPerBlock - Tried to activate too many deals at a given start_block.
  • UnexpectedValidationError - publish_storage_deals's core logic was invoked with a broken invariant that should have been caught by validate_deals. Please report an issue to the developers.
  • DealPreconditionFailed - Due to a programmer bug. Please report an issue to the developers.

Constants

Name | Description | Value
MaxDeals | How many deals can be published in a single batch of publish_storage_deals. | 128
MaxDealsPerBlock | Maximum deals that can be scheduled to start at the same block. | 128
MinDealDuration | Minimum time an activated deal should last. | 5 Minutes (50 Blocks)
MaxDealDuration | Maximum time an activated deal should last. | 180 Minutes (1800 Blocks)

Storage Provider Pallet


Overview

The Storage Provider Pallet handles the creation of storage providers and facilitates storage providers and clients in creating storage deals. Storage providers must submit the Proof of Space-time (PoSt) and the Proof of Replication (PoRep) to the Storage Provider Pallet to avoid the penalties the pallet imposes through slashing.

Usage

Declaring storage faults and recoveries

Faulty sectors are subject to penalties. To minimize said penalties, the storage provider should declare as faulty any sector for which they cannot generate a PoSt; this masks said sectors in future deadlines, minimizing the suffered penalties. A storage provider must declare a sector as faulty before the challenge window.

Through the declare_faults and declare_faults_recovered extrinsics the storage provider can declare sectors as faulty or recovered1.

Declaring faults and recoveries
1: Recovered sectors still require being proven before they can become fully active again.

Substrate pallet hooks execute actions when certain conditions are met. We use these hooks, when a block finalizes, to check whether storage providers are up to date with their proofs. If a storage provider fails to submit proofs on time, the Storage Provider Pallet signals the Market Pallet to penalize the storage provider by removing and burning the collateral locked up during pre-commit.

Extrinsics

register_storage_provider

Registering a storage provider is the first extrinsic any storage provider must call; without being registered, other extrinsics will return an error.

Before a storage provider can register, they must set up a PeerId. This PeerId is used in the p2p network to connect to the storage provider.

Name | Description | Type
peer_id | libp2p ID | Hex string of the PeerId bytes
window_post_proof_type | Proof type the storage provider uses | String, currently only StackedDRGWindow2KiBV1P1 is available

Example

Registering a storage provider with keypair //Alice and peer ID alice with the following command2:

storagext-cli --sr25519-key "//Alice" storage-provider register alice
2: Read more about the register command in Storagext CLI/Subcommand storage-provider/register.

pre_commit_sectors

After publishing a deal, the storage provider needs to pre-commit the sector information to the chain. Pre-committed sectors are not yet valid; they need to be proven first. The pre-commit extrinsic takes in an array of the following values:

Name | Description | Type
seal_proof | Seal proof type this storage provider is using3 | String, currently only StackedDRG2KiBV1P1 is available
sector_number | The sector number that is being pre-committed | Positive integer
sealed_cid | Commitment of replication | Hex string of the sealed CID bytes
deal_ids | Deal IDs to be pre-committed, from publish_storage_deals | Array of integers
expiration | Expiration block of the pre-committed sector | Positive integer
unsealed_cid | Commitment of data sector sealing | Hex string of the unsealed CID bytes
3: Only one seal-proof type is supported at the moment, 2KiB.

Example

Storage provider //Alice pre-committing4 sector number 1, with a single deal ID 0.

storagext-cli --sr25519-key "//Alice" storage-provider pre-commit @pre-commit-sector.json

Where pre-commit-sector.json is a file with contents similar to:

{
  "sector_number": 1,
  "sealed_cid": "bafk2bzaceajreoxfdcpdvitpvxm7vkpvcimlob5ejebqgqidjkz4qoug4q6zu",
  "deal_ids": [0],
  "expiration": 100,
  "unsealed_cid": "bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi",
  "seal_proof": "StackedDRG2KiBV1P1"
}
4: Read more about the pre-commit command in Storagext CLI/Subcommand storage-provider/pre-commit.

prove_commit_sectors

After pre-committing some new sectors, the storage provider needs to supply a Proof-of-Replication for these sectors3. The prove-commit extrinsic takes in an array of the following values:

Name | Description | Type
sector_number | The sector number that is being prove-committed | Positive integer
proof | The proof of replication | Hex string of the proof bytes
3: At the moment, any proof of non-zero length is accepted for PoRep.

Example

This example follows up on the pre-commit example. Storage provider //Alice prove-commits5 sector number 1.

storagext-cli --sr25519-key "//Alice" storage-provider prove-commit @prove-commit-sector.json

Where prove-commit-sector.json is a file with contents similar to:

{
  "sector_number": 1,
  "proof": "1230deadbeef"
}
5: Read more about the prove-commit command in Storagext CLI/Subcommand storage-provider/prove-commit.

submit_windowed_post

A storage provider needs to periodically submit a Proof-of-Spacetime to prove that they are still storing the data they promised. Multiple proofs can be submitted at once.

Name | Description | Type
deadline | The deadline index which the submission targets | Positive integer
partitions | The partitions being proven | Array of positive integers
post_proof | The proof type, should be consistent with the proof type used for registration | String, currently only StackedDRGWindow2KiBV1P1 is available
proof_bytes | The proof submission, to be checked in the storage provider pallet | Hex string of the proof bytes

Example

Storage provider //Alice submitting6 proof for deadline 0, partition 0.

storagext-cli --sr25519-key "//Alice" storage-provider submit-windowed-post @submit-windowed-post.json

Where submit-windowed-post.json is a file with contents similar to:

{
  "deadline": 0,
  "partition": [0],
  "proof": {
    "post_proof": "2KiB",
    "proof_bytes": "1230deadbeef"
  }
}
6: Read more about the submit-windowed-post command in Storagext CLI/Subcommand storage-provider/submit-windowed-post.

declare_faults

A storage provider can declare faults when they know they cannot submit PoSt on time to prevent getting penalized. Faults have an expiry of 42 days. The sectors will be terminated if the faults have not been recovered before this time. Multiple faults can be declared at once.

declare_faults can take in multiple fault declarations:

Name | Description | Type
faults | The fault declarations | Array of the fault declarations, described below

Where the fault declarations contain:

Name | Description | Type
deadline | The deadline to which the faulty sectors are assigned | Positive integer
partition | Partition index within the deadline containing the faulty sectors | Positive integer
sectors | Sectors in the partition being declared faulty | Set of positive integers

Example

Storage provider //Alice declaring faults7 on deadline 0, partition 0, sector 1.

storagext-cli --sr25519-key "//Alice" storage-provider declare-faults @fault-declaration.json

Where fault-declaration.json is a file with contents similar to:

[
  {
    "deadline": 0,
    "partition": 0,
    "sectors": [1]
  }
]
7: Read more about the declare-faults command in Storagext CLI/Subcommand storage-provider/declare-faults.

declare_faults_recovered

After declaring sectors as faulty, a storage provider can recover them. The storage provider must recover the faults if the system has marked some sectors as faulty due to a missing PoSt. Faults are not fully recovered until the storage provider submits a valid PoSt after the declare_faults_recovered extrinsic.

declare_faults_recovered can take in multiple fault recoveries:

Name | Description | Type
recoveries | The fault recoveries | Array of the fault recovery declarations, described below

Where the fault recoveries contain:

Name | Description | Type
deadline | The deadline to which the recovered sectors are assigned | Positive integer
partition | Partition index within the deadline containing the recovered sectors | Positive integer
sectors | Sectors in the partition being declared recovered | Set of positive integers

Example

Storage provider //Alice declaring recoveries8 on deadline 0, partition 0, sector 1.

storagext-cli --sr25519-key "//Alice" storage-provider declare-faults-recovered @fault-declaration.json

Where fault-declaration.json is a file with contents similar to:

[
  {
    "deadline": 0,
    "partition": 0,
    "sectors": [1]
  }
]
8: Read more about the declare-faults-recovered command in Storagext CLI/Subcommand storage-provider/declare-faults-recovered.

terminate_sectors

A storage provider can terminate sectors with the terminate_sectors extrinsic. This requires the storage provider to have no unproven sectors. terminate_sectors can process multiple terminations in a single extrinsic.

Name | Description | Type
terminations | The sectors and partitions to terminate | An array of termination declarations, described below

Where the termination declarations contain:

Name | Description | Type
deadline | The deadline the termination is targeting | Positive integer
partition | Partition index within the deadline containing the sector to be terminated | Positive integer
sectors | Sectors in the partition being terminated | Set of positive integers

Example

Storage provider //Alice terminating sectors9 on deadline 0, partition 0, sector 1.

storagext-cli --sr25519-key "//Alice" storage-provider terminate-sectors @terminate-sectors.json

Where terminate-sectors.json is a file with contents similar to:

[
  {
    "deadline": 0,
    "partition": 0,
    "sectors": [1]
  }
]
9: Read more about the terminate-sectors command in Storagext CLI/Subcommand storage-provider/terminate-sectors.

Events

The Storage Provider Pallet emits the following events:

  • StorageProviderRegistered - Indicates that a new storage provider has been registered.
    • owner - SS58 address of the storage provider.
    • info - The static information about the new storage provider. This information includes:
      • peer_id - Libp2p identity that should be used when connecting to the storage provider.
      • window_post_proof_type - The Window PoSt proof type used by the storage provider.
      • sector_size - Amount of space in each sector committed to the network by the storage provider.
      • window_post_partition_sectors - The number of sectors in each Window PoSt partition (proof).
  • SectorsPreCommitted - A storage provider has pre-committed some new sectors after publishing some new deal.
    • owner - SS58 address of the storage provider.
    • sectors - An array with information about the sectors that are pre-committed. This information includes:
      • seal_proof - The seal proof type used.
      • sector_number - The sector number that is pre-committed.
      • sealed_cid - Commitment of replication.
      • deal_ids - Deal IDs that are activated during pre-commit.
      • expiration - Expiration of the pre-committed sector.
      • unsealed_cid - Commitment of data.
  • SectorsProven - A storage provider has proven sectors that they previously pre-committed.
    • owner - SS58 address of the storage provider.
    • sectors - An array with information about the sectors that are proven. This information includes:
      • sector_number - The sector number that is proven.
      • partition_number - The partition number the proven sector is in.
      • deadline_idx - The deadline index assigned to the proven sector.
  • SectorSlashed - A previously pre-committed, but not proven, sector has been slashed by the system because it has expired.
    • owner - SS58 address of the storage provider.
    • sector_number - The sector number that has been slashed because of expiry.
  • ValidPoStSubmitted - A valid PoSt has been submitted by a storage provider.
    • owner - SS58 address of the storage provider.
  • FaultsDeclared - A storage provider has declared some sectors as faulty.
    • owner - SS58 address of the storage provider.
    • faults - An array with information about the fault declarations. This information includes:
      • deadline - The deadline to which the faulty sectors are assigned.
      • partition - Partition number within the deadline containing the faulty sectors.
      • sectors - Sectors in the partition being declared as faulty.
  • FaultsRecovered - A storage provider has recovered some sectors previously declared as faulty.
    • owner - SS58 address of the storage provider.
    • recoveries - An array with information about the fault recoveries. This information includes:
      • deadline - The deadline to which the recovered sectors are assigned.
      • partition - Partition number within the deadline containing the recovered sectors.
      • sectors - Sectors in the partition being declared as recovered.
  • PartitionFaulty - The system detected that a storage provider did not submit their PoSt on time and has marked some sectors as faulty.
    • owner - SS58 address of the storage provider.
    • partition - Partition number for which the PoSt was missed.
    • sectors - The sectors in the partition declared faulty by the system.
  • SectorsTerminated - A storage provider has terminated some sectors.
    • owner - SS58 address of the storage provider.
    • terminations - An array with information about the terminated sectors. This information includes:
      • deadline - The deadline to which the terminated sectors were assigned.
      • partition - The partition number within the deadline containing the terminated sectors.
      • sectors - The sectors in the partition that have been terminated.

Errors

The Storage Provider Pallet actions can fail with the following errors:

  • StorageProviderExists - A storage provider is already registered and tries to register again.
  • StorageProviderNotFound - This error is emitted by all extrinsics except registration in the storage provider pallet when a storage provider tries to call an extrinsic without registering first.
  • InvalidSector - This error can be emitted when:
    • A storage provider supplies a sector number during pre-commit exceeding the maximum number of sectors.
    • A storage provider supplies a sector number during proof commit that exceeds the maximum amount of sectors.
  • InvalidProofType - This error can be emitted when:
    • A storage provider submits a seal-proof type during pre-commit that is different than the one configured during registration.
    • During a prove commit extrinsic, the proof type supplied by the storage provider is invalid.
    • A storage provider submits a windowed PoSt proof type that is different from the one configured during registration.
  • NotEnoughFunds - Emitted when a storage provider does not have enough funds for the pre-commit deposit.
  • SectorNumberAlreadyUsed - A storage provider tries to pre-commit a sector number that has already been used.
  • ExpirationBeforeActivation - A storage provider tries to pre-commit a sector where that sector expires before activation.
  • ExpirationTooSoon - A storage provider tries to pre-commit a sector with a total lifetime less than MinSectorExpiration.
  • ExpirationTooLong - A storage provider tries to pre-commit a sector with an expiration that exceeds MaxSectorExpiration.
  • MaxSectorLifetimeExceeded - A storage provider tries to pre-commit a sector with a total lifetime that exceeds SectorMaximumLifetime.
  • InvalidCid - Emitted when a storage provider submits an invalid unsealed CID when trying to pre-commit a sector.
  • ProveCommitAfterDeadline - A storage provider has tried to prove a previously pre-committed sector after the proving deadline.
  • PoStProofInvalid - A proof that the storage provider submitted is invalid. Currently, this error is emitted when the proof length is 0.
  • InvalidUnsealedCidForSector - This error is emitted when the declared unsealed_cid for pre_commit is different from the one calculated by the system.
  • FaultDeclarationTooLate - A fault declaration was submitted after the fault declaration cutoff. The fault declaration can be submitted after the upcoming deadline is closed.
  • FaultRecoveryTooLate - A fault recovery was submitted after the fault recovery cutoff. The fault recovery can be submitted after the upcoming deadline is closed.
  • CouldNotTerminateDeals - Emitted when trying to terminate sector deals fails.
  • InvalidDeadlineSubmission - Emitted when an error occurs when submitting PoSt.
  • CouldNotVerifySectorForPreCommit - Failure during pre-commit because the commd calculation failed due to a programming error. Please report an issue to the developers.
  • SlashingFailed - Slashing of funds fails due to a programmer error. Please report an issue to the developers.
  • ConversionError - Due to a programmer error. Please report an issue to the developers.
  • GeneralPalletError - An error occurred in one of the pallet modules. These errors can be:
    • PartitionErrorFailedToAddSector - Emitted when adding sectors fails.
    • PartitionErrorDuplicateSectorNumber - Emitted when trying to add a sector number that has already been used in this partition.
    • PartitionErrorFailedToAddFaults - Emitted when adding faults to the partition fails.
    • PartitionErrorSectorsNotLive - Emitted when trying to remove sectors that are not live.
    • PartitionErrorFailedToRemoveRecoveries - Emitted when removing recovering sectors from the partition fails.
    • PartitionErrorUnexpectedRecoveries - Emitted when encountering unexpected recoveries while popping expired sectors.
    • PartitionErrorExpiredSectorsAlreadyTerminated - Emitted when trying to pop expired sectors that are already terminated.
    • DeadlineErrorDeadlineIndexOutOfRange - Emitted when the deadline index supplied for submit_windowed_post is out of range.
    • DeadlineErrorDeadlineNotFound - Emitted when trying to get a deadline index that does not exist.
    • DeadlineErrorCouldNotConstructDeadlineInfo - Emitted when constructing DeadlineInfo fails.
    • DeadlineErrorPartitionAlreadyProven - Emitted when a proof is submitted for a partition that is already proven.
    • DeadlineErrorPartitionNotFound - Emitted when trying to retrieve a partition that does not exist.
    • DeadlineErrorProofUpdateFailed - Emitted when trying to update proven partitions fails.
    • DeadlineErrorMaxPartitionsReached - Emitted when the maximum number of partitions for a given deadline has been reached.
    • DeadlineErrorCouldNotAddSectors - Emitted when trying to add sectors to a deadline fails.
    • DeadlineErrorSectorsNotFound - Emitted when trying to use sectors which haven't been prove committed yet.
    • DeadlineErrorSectorsNotFaulty - Emitted when trying to recover non-faulty sectors.
    • DeadlineErrorCouldNotAssignSectorsToDeadlines - Emitted when assigning sectors to deadlines fails.
    • DeadlineErrorFailedToUpdateFaultExpiration - Emitted when trying to update fault expirations fails.
    • StorageProviderErrorMaxPreCommittedSectorExceeded - Happens when an SP tries to pre-commit more sectors than SECTOR_MAX.
    • StorageProviderErrorSectorNotFound - Happens when trying to access a sector that does not exist.
    • StorageProviderErrorSectorNumberInUse - Happens when a sector number is already in use.
    • SectorMapErrorFailedToInsertSector - Emitted when trying to insert sector(s) fails.
    • SectorMapErrorFailedToInsertPartition - Emitted when trying to insert partition fails.
    • ExpirationQueueErrorExpirationSetNotFound - Expiration set not found.
    • ExpirationQueueErrorSectorNotFound - Sector not found in expiration set.
    • ExpirationQueueErrorInsertionFailed - Insertion into the expiration queue failed.

Pallet constants

The Storage Provider Pallet has the following constants:

Name | Description | Value
WPoStProvingPeriod | The average period for proving all sectors maintained by a storage provider. | 4 Minutes (40 Blocks)
WPoStChallengeWindow | The period immediately before a deadline during which a challenge can be generated by the chain and the requisite proofs computed. | 2 Minutes (20 Blocks)
WPoStChallengeLookBack | This period allows the storage providers to start working on the PoSt before the deadline is officially opened to receiving a PoSt. | 1 Minute (10 Blocks)
WPoStPeriodDeadlines | Represents how many challenge deadlines there are in one proving period. Closely tied to WPoStChallengeWindow. | 48
MinSectorExpiration | Minimum time past the current block a sector may be set to expire. | 5 Minutes (50 Blocks)
MaxSectorExpiration | Maximum time past the current block a sector may be set to expire. | 60 Minutes (600 Blocks)
SectorMaximumLifetime | Maximum time a sector can stay in pre-committed state. | 120 Minutes (1200 Blocks)
MaxProveCommitDuration | Maximum time between pre-commit and proving the committed sector. | 5 Minutes (50 Blocks)
MaxPartitionsPerDeadline | Maximum number of partitions that can be assigned to a single deadline. | 3000
FaultMaxAge | Maximum time a fault can exist before being removed by the pallet. | 210 Minutes (2100 Blocks)
FaultDeclarationCutoff | Time before a deadline opens that a storage provider can declare or recover a fault. | 2 Minutes (20 Blocks)

Proofs Pallet


Overview

The Proofs Pallet handles all the logic related to verifying PoRep and PoSt proofs on-chain. It is called by the Storage Provider Pallet when verifying proofs during the prove_commit_sectors and submit_windowed_post extrinsics. The pallet DOES NOT expose any extrinsic for proof verification; it only implements a trait that can be coupled to other pallets.

To verify the proofs properly, it needs to have the verifying key parameters set for the sector size via set_porep_verifying_key and set_post_verifying_key.

Usage

This pallet can only be directly used via the trait primitives_proofs::ProofVerification. However, for the trait to work and not fail with Error::MissingPoRepVerifyingKey/Error::MissingPoStVerifyingKey, the verifying keys need to be set via the set_porep_verifying_key/set_post_verifying_key extrinsics.

Ideally, users shouldn't worry about it, as it will be set by the governance during a trusted setup procedure and then Storage Providers will download the proof generation parameters. However, in the MVP phase, those keys need to be set with the extrinsics after starting a testnet.

Verifying Keys are set for a Sector Size once and then shared across all proof verifications. Currently, the network only supports 2KiB sector sizes, so parameters need to be generated and set for it.

Extrinsics

set_porep_verifying_key

The Verifying Key is a set of shared parameters used for zk-SNARK proof verification. It can be generated via the polka-storage-provider-client proofs porep-params command. The verifying key used in the verification must match the proving parameters used in the proof generation.

The extrinsic sets the verifying key, received in SCALE-encoded format, and then uses it for all subsequent verifications. The verifying key is used to verify every PoRep proof across the network.

Name | Description | Type
verifying_key | Shared set of parameters used for zk-SNARK proof verification | SCALE-encoded bytes of a Verifying Key

Example

Setting a verifying key from the //Alice account1, where the verifying key is stored in the ./2KiB.porep.vk.scale file.

storagext-cli --sr25519-key "//Alice" proofs set-porep-verifying-key 2KiB.vk.scale
1: Note that in the MVP every account can set a Verifying Key. It's a risky operation that can halt the entire network because, if the verifying key changes, Storage Providers need to update their generating parameters as well.

set_post_verifying_key

The Verifying Key is a set of shared parameters used for zk-SNARK proof verification. It can be generated via the polka-storage-provider-client proofs post-params command. The verifying key used in the verification must match the proving parameters used in the proof generation.

The extrinsic sets the verifying key, received in SCALE-encoded format, and then uses it for all subsequent verifications. The verifying key is used to verify every PoSt proof across the network.

Name | Description | Type
verifying_key | Shared set of parameters used for zk-SNARK proof verification | SCALE-encoded bytes of a Verifying Key

Example

Setting a verifying key from the //Alice account1, where the verifying key is stored in the ./2KiB.post.vk.scale file.

storagext-cli --sr25519-key "//Alice" proofs set-post-verifying-key 2KiB.vk.scale
1: Note that in the MVP every account can set a Verifying Key. It's a risky operation that can halt the entire network because, if the verifying key changes, Storage Providers need to update their generating parameters as well.

Events

The Proofs Pallet emits the following events:

  • PoRepVerifyingKeyChanged - PoRep verifying key has been changed.
    • who - SS58 address of the caller.
  • PoStVerifyingKeyChanged - PoSt verifying key has been changed.
    • who - SS58 address of the caller.

Errors

The Proofs Pallet actions can fail with the following errors:

  • InvalidVerifyingKey - supplied Verifying Key was not in the valid format and could not be deserialized.
  • InvalidPoRepProof - PoRep proof could not be verified, it was not created for the given replica window.
  • InvalidPoStProof - PoSt proof could not be verified, it was not created for the given sector.
  • MissingPoRepVerifyingKey - tried to verify PoRep proof, but the PoRep verifying key was not set previously with the set_porep_verifying_key extrinsic.
  • MissingPoStVerifyingKey - tried to verify PoSt proof, but the PoSt verifying key was not set previously with the set_post_verifying_key extrinsic.
  • Conversion - PoRep/PoSt Proof/VerifyingKey are in an invalid format and cannot be deserialized.

Randomness Pallet


Overview

The randomness pallet saves a random seed for each block when it is finalized and allows retrieval of this randomness at a later time. There is a limitation: the randomness is only available after the 81st block of the chain, because randomness is predictable earlier. Currently, the seeds are used for the sealing pipeline's pre-commit and prove-commit, in other words generating a replica and proving a sector.

Usage

This pallet exposes an interface to get on-chain randomness for a certain block via the trait primitives_proofs::Randomness or the chain state query pallet_randomness::SeedsMap. Note that you can only get randomness for current_block - 1 and, depending on the configuration, the old randomness seed will be removed after the associated block has passed.

Extrinsics

The pallet does not expose any extrinsics.

Events

The pallet does not emit any events.

Errors

The Randomness Pallet actions can fail with the following errors:

  • SeedNotAvailable - the seed for the given block number is not available, which means the randomness pallet has not gathered randomness for this block yet.

Constants

The Randomness Pallet has the following constants:

Name | Description | Value
CleanupInterval | Clean-up interval specified in number of blocks between cleanups. | 1 Day
SeedAgeLimit | The number of blocks after which the seed is cleaned up. | 30 Days

Faucet Pallet


Overview

The Faucet Pallet enables users to drip funds into their wallet to start using the polka-storage chain.

The faucet pallet only exists on the testnet. The dripped funds do not have any real-world value.

Usage

The Faucet Pallet is used to get funds on testnet into an externally generated account.

Only one drip per account per 24 hours is allowed. When trying to drip more often than once per 24 hours, the transaction will be rejected.

Extrinsics

drip

The drip extrinsic is an unsigned extrinsic (or inherent) with no gas fees. This means that any account can get funds, even if their current balance is 0.

Name | Description | Type
account | The target account to transfer funds to | SS58 address

Example

storagext-cli faucet drip 5GpRRVXgPSoKVmUzyinpJPiCjfn98DsuuHgMV2f9s5NCzG19

Events

The Faucet Pallet only emits a single event:

  • Dripped - Indicates which account was dripped to and at what block number.
    • who - SS58 address of the dripped account.
    • when - Block at which the drip occurred.

Errors

The Faucet Pallet actions can fail with the following errors:

  • FaucetUsedRecently - the provided account had funds dripped within the last 24 hours.

Constants

The Faucet Pallet has the following constants:

Name | Description | Value
FaucetDripAmount | The amount that is dispensed, in Plancks. | 10_000_000_000_000
FaucetDripDelay | How often an account can be topped up. | 1 Day

Getting Started

This chapter goes through the process of setting up, running, and trying out the components implemented so far.

System Requirements

Before proceeding with the setup, please ensure the host system meets the following requirements:

  • OS: Linux x86_64 / macOS ARM64
  • RAM: Minimum 8GB, recommended 16GB or more

Guides

Building

The following chapters will cover how to build the Polka Storage parachain using multiple methods.

Quick reminder that Windows is not part of the supported operating systems. As such, the following guides have not been tested on Windows.

We'll be building 5 artifacts:

  • polka-storage-node — the Polka Storage Polkadot parachain node.
  • polka-storage-provider-server — the Polka Storage Storage Provider, responsible for accepting deals, storing files and executing storage proofs.
  • polka-storage-provider-client — the Polka Storage Storage Provider client, this CLI tool has several utilities to interact with the Storage Provider server, such as proposing and publishing deals, as well as some wallet utilities and proof demos.
  • mater-cli — the Mater CLI enables you to convert files into CARv2 archives, an essential part of preparing files for submission to the network.
  • storagext-cli — the Storagext CLI is a lower-level tool to manually run the Polka Storage extrinsics.

To build these artifacts, we provide two main methods:

Building from source

This guide outlines how to set up your environment to build the Polka Storage parachain; we cover how to build the binaries directly on your system or using Nix to ease the process.

Get the code

To get started, first clone the repository and enter the repository's directory:

git clone git@github.com:eigerco/polka-storage.git
cd polka-storage

System dependencies

To build the binaries directly on your system you will need the following tools:

  • Rust 1.81.0 — you can install it using rustup; see its guide for help.
  • Other dependencies — keep reading, we'll get to it after the end of this list!
  • just (optional) — (after installing Rust) you can use cargo install just or check the official list of packages.

The dependencies mentioned are for Linux distros using the apt family of package managers. Different systems may use different package managers, as such, they may require you to find the equivalent package.

To install the required dependencies run the following commands:

$ sudo apt update
$ sudo apt install -y libhwloc-dev \
    opencl-headers \
    ocl-icd-opencl-dev \
    protobuf-compiler \
    clang \
    build-essential \
    git \
    curl

Using Nix

You can use Nix to simplify the building process; if you're just taking the network for a test-drive, this is a great method to get started.

Nix will take care of setting up all the dependencies for you! If you're curious, you can read more about using Nix in fasterthanlime's blog, the official Nix guide or Determinate Systems' Zero to Nix guide.

Binaries built using Nix will not work on other systems since they will be linked with Nix-specific paths.

Pre-requisites

If you're using direnv, when entering the cloned directory for the first time Nix will activate automatically and install the required packages; this may take some time.

If you're not using direnv, you will need to run nix develop to achieve the same effect — for more information refer to the official Nix guide — https://nix.dev/manual/nix/2.17/command-ref/new-cli/nix3-develop.

Building

After all this setup, it is time to start building the binaries, which you can do manually using the following command:

When building polka-storage-node you should add --features polka-storage-runtime/testnet which enables the testnet configuration; all the code in the repo is currently targeting this feature, not including it may lead to unexpected behavior.

When building storagext-cli you may want to add --features storagext/insecure_url which enables using non-TLS HTTP and WebSockets.

cargo build --release -p <BINARY-NAME>

Where <BINARY-NAME> is one of:

  • polka-storage-node
  • polka-storage-provider-client
  • polka-storage-provider-server
  • storagext-cli
  • mater-cli

For more information on what each binary does, refer to Building.
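
For instance, a build of the node and the storagext CLI with the feature flags mentioned above could look like this (a sketch; adjust the packages and features to your needs):

cargo build --release -p polka-storage-node --features polka-storage-runtime/testnet
cargo build --release -p storagext-cli --features storagext/insecure_url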

Just recipes

To simplify the building process, we've written some Just recipes.

Command | Description
build-polka-storage-node | Builds the Polka Storage parachain node.
build-polka-storage-provider-server | Builds the Storage Provider server binary.
build-polka-storage-provider-client | Builds the Storage Provider client binary.
build-storagext-cli | Builds the storagext CLI used to execute extrinsics.
build-mater-cli | Builds the mater CLI which is used by storage clients to convert files to CARv2 format and extract CARv2 content.
build-binaries-all | Builds all the binaries above, this may take a while (but at least cargo reuses artifacts).

Running

After building the desired binaries, you can find them under the target/release folder (or target/debug if you didn't use the -r/--release flag).

Assuming you're in the project root, you can run them with the following command:

$ target/release/<BINARY-NAME>

Where <BINARY-NAME> is one of:

  • polka-storage-node
  • polka-storage-provider-server
  • polka-storage-provider-client
  • mater-cli
  • storagext-cli

Additionally, you can move them to a folder under your $PATH and run them as you would with any other binary.

Docker Setup

This guide outlines how to set up your environment using Docker to get started with the Polka Storage parachain.

Pre-requisites

Install Docker on your system by following the Docker install instructions.

Using Podman instead of Docker may work; however, we do not support Podman!

Dockerfile setup

All Docker builds are composed of 4 stages.

  1. Set up cargo chef; this caches the Rust dependencies for faster builds.
  2. Planning — cargo chef analyzes the current project to determine the minimum subset of files required to build it and cache the dependencies.
  3. Build — cargo chef checks the project skeleton identified in the planner stage and builds it to cache dependencies.
  4. Runtime — sets up the runtime with Debian and imports the binary built in the previous stage.

Building & Running

Clone the repository and go into the directory:

git clone git@github.com:eigerco/polka-storage.git
cd polka-storage

You can find Dockerfiles for each binary under the docker/ folder. To build the images manually you can use the following command:

docker build \
        --build-arg VCS_REF="$(git rev-parse HEAD)" \
        --build-arg BUILD_DATE="$(date -u +'%Y-%m-%dT%H:%M:%SZ')" \
        -t <DOCKERFILE-NAME>:"$(cargo metadata --format-version=1 --no-deps | jq -r '.packages[0].version')" \
        --file ./docker/dockerfiles/<DOCKERFILE-NAME>.Dockerfile \
        .

Where you can replace <DOCKERFILE-NAME> by one of the following:

  • polka-storage-node
  • polka-storage-provider-server
  • polka-storage-provider-client
  • mater-cli
  • storagext-cli

To run the images manually, you apply the same pattern to the following command:

docker run -it polkadotstorage.azurecr.io/<DOCKERFILE-NAME>:"$(cargo metadata --format-version=1 --no-deps | jq -r '.packages[0].version')"

Just recipes

To simplify the building process, we've written some Just recipes.

Build recipes

Command | Description
build-mater-docker | Builds the mater CLI image which is used by storage clients to convert files to CARv2 format and extract CARv2 content.
build-polka-storage-node-docker | Builds the Polka Storage parachain node image.
build-polka-storage-provider-server-docker | Builds the Storage Provider server image.
build-polka-storage-provider-client-docker | Builds the Storage Provider client image.
build-storagext-docker | Builds the storagext CLI image used to execute extrinsics.
build-docker-all | Builds all the images above, this might take a while to complete.

Running recipes

Command | Description
run-mater-docker | Runs the image, opening a shell with access to the mater-cli binary.
run-polka-storage-node-docker | Runs the polka-storage-node inside the built Docker image.
run-polka-storage-provider-server-docker | Runs the image, opening a shell with access to the polka-storage-provider-server binary.
run-polka-storage-provider-client-docker | Runs the image, opening a shell with access to the polka-storage-provider-client binary.
run-storagext-docker | Runs the image, opening a shell with access to the storagext-cli binary.

Local Testnet - Polka Storage Parachain

This guide helps to set up a local parachain network using zombienet. At the end, we will have three nodes: Alice, Bob and Charlie. Alice and Bob will be running Polkadot relay chain nodes as validators, and Charlie will run a relay chain and parachain node. Charlie will be our contact point to the parachain network.

Native Binaries

The binaries for the latest releases are available to download and can be run without any additional dependencies. We support Linux x86_64 and macOS ARM64. The commands below will download:

  • Relay Chain binaries (polkadot, polkadot-prepare-worker, polkadot-execute-worker),
  • Polka Storage Parachain binary (polka-storage-node),
  • Polka Storage Provider internal node (polka-storage-provider-server),
  • Polka Storage Provider Client internal node's RPC client and proving tools (polka-storage-provider-client),
  • Storagext CLI to interact with a chain by sending extrinsics (storagext-cli),
  • Mater CLI for CARv2 file archive operations (mater-cli),
  • zombienet to spawn local testnets and orchestrate them (zombienet),
  • Polka Storage Parachain out-of-the-box zombienet's configuration (polka-storage-testnet.toml).

Linux x86_64

  1. Download the binaries:
wget https://github.com/paritytech/polkadot-sdk/releases/download/polkadot-stable2407-1/polkadot
wget https://github.com/paritytech/polkadot-sdk/releases/download/polkadot-stable2407-1/polkadot-prepare-worker
wget https://github.com/paritytech/polkadot-sdk/releases/download/polkadot-stable2407-1/polkadot-execute-worker
wget https://github.com/eigerco/polka-storage/releases/download/polka-storage-node-v0.0.0/polka-storage-node-linux-x86 -O polka-storage-node
wget https://github.com/eigerco/polka-storage/releases/download/polka-storage-provider-client-v0.1.0/polka-storage-provider-client-linux-x86 -O polka-storage-provider-client
wget https://github.com/eigerco/polka-storage/releases/download/polka-storage-provider-server-v0.1.0/polka-storage-provider-server-linux-x86 -O polka-storage-provider-server
wget https://github.com/eigerco/polka-storage/releases/download/storagext-cli-v0.1.0/storagext-cli-linux-x86 -O storagext-cli
wget https://github.com/eigerco/polka-storage/releases/download/mater-cli-v0.1.0/mater-cli-linux-x86 -O mater-cli
wget https://github.com/paritytech/zombienet/releases/download/v1.3.106/zombienet-linux-x64 -O zombienet
  2. Set up permissions:
chmod +x zombienet polka-storage-node polka-storage-provider-client polka-storage-provider-server storagext-cli mater-cli polkadot polkadot-prepare-worker polkadot-execute-worker
  3. Run zombienet:
export PATH=$(pwd):$PATH

wget https://s3.eu-central-1.amazonaws.com/polka-storage/polka-storage-testnet.toml
zombienet -p native spawn polka-storage-testnet.toml

MacOS ARM

  1. Download the binaries:
wget https://github.com/eigerco/polka-storage/releases/download/polka-storage-node-v0.0.0/polkadot-2407-1-macos-arm64 -O polkadot
wget https://github.com/eigerco/polka-storage/releases/download/polka-storage-node-v0.0.0/polkadot-2407-1-prepare-worker-macos-arm64 -O polkadot-prepare-worker
wget https://github.com/eigerco/polka-storage/releases/download/polka-storage-node-v0.0.0/polkadot-2407-1-execute-worker-macos-arm64 -O polkadot-execute-worker
wget https://github.com/eigerco/polka-storage/releases/download/polka-storage-node-v0.0.0/polka-storage-node-macos-arm64 -O polka-storage-node
wget https://github.com/eigerco/polka-storage/releases/download/polka-storage-provider-server-v0.1.0/polka-storage-provider-server-macos-arm64 -O polka-storage-provider-server
wget https://github.com/eigerco/polka-storage/releases/download/polka-storage-provider-client-v0.1.0/polka-storage-provider-client-macos-arm64 -O polka-storage-provider-client
wget https://github.com/eigerco/polka-storage/releases/download/storagext-cli-v0.1.0/storagext-cli-macos-arm64 -O storagext-cli
wget https://github.com/eigerco/polka-storage/releases/download/mater-cli-v0.1.0/mater-cli-macos-arm64 -O mater-cli
wget https://github.com/paritytech/zombienet/releases/download/v1.3.106/zombienet-macos-arm64 -O zombienet
  2. Set up permissions & de-quarantine:
chmod +x zombienet polka-storage-node polka-storage-provider-server polka-storage-provider-client storagext-cli mater-cli polkadot polkadot-prepare-worker polkadot-execute-worker
xattr -d com.apple.quarantine zombienet polka-storage-node polka-storage-provider-server polka-storage-provider-client storagext-cli mater-cli polkadot polkadot-prepare-worker polkadot-execute-worker
If, when running the xattr command, it outputs No such attr: com.apple.quarantine, there's nothing to worry about. It means the downloaded binaries were not quarantined.
  3. Run zombienet:
export PATH=$(pwd):$PATH

wget https://s3.eu-central-1.amazonaws.com/polka-storage/polka-storage-testnet.toml
zombienet -p native spawn polka-storage-testnet.toml

The parachain is also accessible using the Polkadot.js Apps interface by clicking on this link: https://polkadot.js.org/apps/?rpc=ws://127.0.0.1:42069#/explorer

Polkadot/Substrate Portal

Or interact with the chain via storagext-cli, for example:

storagext-cli --sr25519-key "//Alice" storage-provider register Alice

Kubernetes

Docker images are only published for x86_64 platforms! They won't work on Kubernetes on macOS.

Prerequisites

Start up the Kubernetes cluster

Using minikube, start the cluster with the following command:

minikube start

More information about minikube is available on its Getting Started page.

Running the Parachain

  1. Create a local-kube-testnet.toml file with the following content:
[settings]
image_pull_policy = "IfNotPresent"

[relaychain]
chain = "rococo-local"
default_args = ["--detailed-log-output", "-lparachain=debug,xcm=trace,runtime=trace"]
default_image = "docker.io/parity/polkadot:stable2407-1"

[[relaychain.nodes]]
name = "alice"
validator = true

[[relaychain.nodes]]
name = "bob"
validator = true

[[parachains]]
cumulus_based = true

# We need to use a Parachain of an existing System Chain (https://github.com/paritytech/polkadot-sdk/blob/master/polkadot/runtime/rococo/src/xcm_config.rs).
# The reason: being able to get native DOTs from Relay Chain to Parachain via XCM Teleport.
# We'll have a proper Parachain ID in the *future*, but for now, let's stick to 1000 (which is AssetHub and trusted).
id = 1000

# Run Charlie as parachain collator
[[parachains.collators]]
args = ["--detailed-log-output", "-lparachain=debug,xcm=trace,runtime=trace"]
command = "polka-storage-node"
image = "ghcr.io/eigerco/polka-storage-node:0.0.0"
name = "charlie"
rpc_port = 42069
validator = true

For details on this configuration, refer to the Zombienet Configuration chapter.

  2. Run the Parachain and spawn the zombienet testnet in the Kubernetes cluster:
zombienet -p kubernetes spawn local-kube-testnet.toml

Example output:
│ /ip4/10.1.0.16/tcp/30333/ws/p2p/12D3KooWPKzmmE2uYgF3z13xjpbFTp63g9dZFag8pG6MgnpSLF4S                                   │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘

         Warn: Tracing collator service doesn't exist
┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                                       Network launched 🚀🚀                                                       │
├──────────────────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Namespace                    │ zombie-1cecb9b5e0f9a14208f2fbefd9384490                                                            │
├──────────────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Provider                     │ kubernetes                                                                                         │
├──────────────────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────┤
│                                                         Node Information                                                          │
├──────────────────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Name                         │ alice                                                                                              │
├──────────────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Direct Link                  │ https://polkadot.js.org/apps/?rpc=ws%3A%2F%2F127.0.0.1%3A34341#/explorer                           │
├──────────────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Prometheus Link              │ http://127.0.0.1:35537/metrics                                                                     │
├──────────────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Log Cmd                      │ kubectl logs -f alice -c alice -n zombie-1cecb9b5e0f9a14208f2fbefd9384490                          │
├──────────────────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────┤
│                                                         Node Information                                                          │
├──────────────────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Name                         │ bob                                                                                                │
├──────────────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Direct Link                  │ https://polkadot.js.org/apps/?rpc=ws%3A%2F%2F127.0.0.1%3A44459#/explorer                           │
├──────────────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Prometheus Link              │ http://127.0.0.1:43841/metrics                                                                     │
├──────────────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Log Cmd                      │ kubectl logs -f bob -c bob -n zombie-1cecb9b5e0f9a14208f2fbefd9384490                              │
├──────────────────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────┤
│                                                         Node Information                                                          │
├──────────────────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Name                         │ charlie                                                                                            │
├──────────────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Direct Link                  │ https://polkadot.js.org/apps/?rpc=ws%3A%2F%2F127.0.0.1%3A42069#/explorer                           │
├──────────────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Prometheus Link              │ http://127.0.0.1:42675/metrics                                                                     │
├──────────────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Log Cmd                      │ kubectl logs -f charlie -c charlie -n zombie-1cecb9b5e0f9a14208f2fbefd9384490                      │
├──────────────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Parachain ID                 │ 1000                                                                                               │
├──────────────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ ChainSpec Path               │ /tmp/zombie-1cecb9b5e0f9a14208f2fbefd9384490_-29755-WOCdKtq9zPGA/1000-rococo-local.json            │
└──────────────────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────┘

Verifying the Setup

Check if all zombienet pods were started successfully:

kubectl get pods --all-namespaces

Example output:
...
zombie-01b7920d650c18d3d78f75fd8b0978af   alice                              1/1     Running     0               77s
zombie-01b7920d650c18d3d78f75fd8b0978af   bob                                1/1     Running     0               62s
zombie-01b7920d650c18d3d78f75fd8b0978af   charlie                            1/1     Running     0               49s
zombie-01b7920d650c18d3d78f75fd8b0978af   fileserver                         1/1     Running     0               2m28s
zombie-01b7920d650c18d3d78f75fd8b0978af   temp                               0/1     Completed   0               2m25s
zombie-01b7920d650c18d3d78f75fd8b0978af   temp-1                             0/1     Completed   0               2m25s
zombie-01b7920d650c18d3d78f75fd8b0978af   temp-2                             0/1     Completed   0               2m15s
zombie-01b7920d650c18d3d78f75fd8b0978af   temp-3                             0/1     Completed   0               2m1s
zombie-01b7920d650c18d3d78f75fd8b0978af   temp-4                             0/1     Completed   0               114s
zombie-01b7920d650c18d3d78f75fd8b0978af   temp-5                             0/1     Completed   0               91s
zombie-01b7920d650c18d3d78f75fd8b0978af   temp-collator                      0/1     Completed   0               104s

Accessing the Parachain

The parachain is available through the Polkadot.js Apps interface by clicking on this link:

https://polkadot.js.org/apps/?rpc=ws%3A%2F%2F127.0.0.1%3A42069#/explorer

This link will automatically connect to Charlie's node running on a local machine at port 42069. The port is configured in local-kube-testnet.toml under rpc_port for Charlie's node.

Checking the logs

At the end of the zombienet output you should see a table like so:

┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                                         Node Information                                                          │
├──────────────────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Name                         │ charlie                                                                                            │
├──────────────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Direct Link                  │ https://polkadot.js.org/apps/?rpc=ws://127.0.0.1:42069#/explorer                                   │
├──────────────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Prometheus Link              │ http://127.0.0.1:44955/metrics                                                                     │
├──────────────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Log Cmd                      │ tail -f                                                                                            │
│                              │ /tmp/nix-shell.gQQj4Y/zombie-bcb786e1748ff0a6becd28289e1f70b9_-677866-G8ea9Qqs65DB/charlie.log     │
├──────────────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Parachain ID                 │ 1000                                                                                               │
├──────────────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ ChainSpec Path               │ /tmp/nix-shell.gQQj4Y/zombie-bcb786e1748ff0a6becd28289e1f70b9_-677866-G8ea9Qqs65DB/1000-rococo-lo… │
└──────────────────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────┘

We strongly recommend you check the logs for the collator (in this case Charlie), using a text editor of your choice.

If the Log Cmd is shortened with ..., try to search for the folder using the zombienet namespace, available at the top of the table. For example:

$ ls /tmp | grep zombie-bcb786e1748ff0a6becd28289e1f70b9

After finding the charlie.log file, you should grep for errors relating to the extrinsic you just ran. For example, if you ran a market extrinsic, like add-balance, you would run:

$ grep "ERROR.*runtime::market" charlie.log

Or, more generally:

$ grep "<LOG_LEVEL>.*runtime::<EXTRINSIC_PALLET>" charlie.log

Where LOG_LEVEL is one of:

  • DEBUG
  • INFO
  • WARN
  • ERROR

And the extrinsic pallet is one of:

  • storage_provider
  • market
  • proofs

Tip: if you are running into the AllProposalsInvalid error, try searching for insane deal in the logs; you should find the cause faster!

For example:

$ grep "insane deal" -B 1 charlie.log
2024-11-14 13:24:24.019 ERROR tokio-runtime-worker runtime::market: [Parachain] deal duration too short: 100 < 288000
2024-11-14 13:24:24.019 ERROR tokio-runtime-worker runtime::market: [Parachain] insane deal: idx 0, error: ProposalError::DealDurationOutOfBounds

Getting funds

This document covers getting funds into an account that has been generated externally.

Setting up your account

In this guide, we cover getting funds into an externally generated account. The recommended way to generate an account is by using the polkadot.js wallet extension.

Please make sure to follow the instructions on how to generate a new account if you have not done so already. You can read more about creating a Polkadot account using the extension at the following link.

Or you can watch the following video:

Dripping funds using the storagext CLI

Make sure the local testnet is running; you can find how to do so in the local testnet guide.

Once the local testnet is up and running we can drip funds into the account.

Funds can be dripped using a single command:

storagext-cli faucet drip <ACCOUNT>

Where the <ACCOUNT> is the SS58 address generated in the previous steps.
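
For example, using the address generated in the wallet chapter later in this book (replace it with your own):

storagext-cli faucet drip "5DPwBLBRGunws9T2aF59cht37HeBg9aSTAc6Fh2aFBJPSsr6"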

For more information about the faucet pallet and the storagext faucet subcommand, check out their respective documentation.

Faucet pallet docs link

storagext faucet subcommand docs link

Getting funds through the Sudo pallet

Make sure the local testnet is running; you can find how to do so in the local testnet guide. Once the local testnet is up and running, navigate to the polkadot-js web app interface by going to the default polkadot.js web interface URL.

If you have changed the ws_port value in the zombienet configuration (local-testnet.toml), this URL will be different and you should change the port accordingly.
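
For example, if you set the port to 9944 (an arbitrary value chosen here for illustration), the URL becomes https://polkadot.js.org/apps/?rpc=ws://127.0.0.1:9944#/explorer.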

Under the developer tab, navigate to Sudo.

sudo selection

Once you are in Sudo, select balances from the submit dropdown.

balance selection

Then, on the box to the right, select forceSetBalance from the dropdown.

force selection

Set the Id field to your generated address and the newFree field to the amount in Plancks, as shown in the image below. If your polkadot.js extension is injected into the polkadot.js web interface, it will recognize the injection and you can select the desired account.

Note that the forceSetBalance extrinsic does NOT add tokens to an account but rather sets the balance to the given amount.

balance forceSetBalance

Sign and submit your transaction; the caller will automatically be set to Alice, a dev account.

sign and submit

After the block has been finalized, the balance will show up in the generated account under the accounts tab and you are ready to start using the polka-storage chain with your own account.

account balance

Launching the Storage Provider

This guide assumes you have read the Building and the Local Testnet - Polka Storage Parachain guides and have a running testnet to connect to.

Setting up the Storage Provider isn't complicated, but it isn't fully automatic either! In this guide, we'll cover how to get up and running with the Storage Provider.

Generating the PoRep Parameters

First and foremost, to allow the Storage Provider to generate PoRep proofs, we need to generate their parameters. We do that with the following command:

$ polka-storage-provider-client proofs porep-params
Generating params for 2KiB sectors... It can take a couple of minutes ⌛
Generated parameters:
/home/user/polka-storage/2KiB.porep.params
/home/user/polka-storage/2KiB.porep.vk
/home/user/polka-storage/2KiB.porep.vk.scale

As advertised, the command has generated the following files:

  • 2KiB.porep.params — The PoRep parameters
  • 2KiB.porep.vk — The verifying key
  • 2KiB.porep.vk.scale — The verifying key, encoded in SCALE format

Registering the Storage Provider

If you encounter errors while running extrinsics, check the parachain logs. Refer to the Checking the logs section under the Local Testnet - Polka Storage Parachain chapter.

Logically, if you want to participate in the network, you need to register. To do so, you need to run one of the following commands:

storagext-cli --sr25519-key <KEY> storage-provider register "<peer_id>"
storagext-cli --ed25519-key <KEY> storage-provider register "<peer_id>"
storagext-cli --ecdsa-key <KEY> storage-provider register "<peer_id>"

Where <KEY> is replaced according to its key type. <peer_id> can be anything, as it is currently used as a placeholder. For example:

storagext-cli --sr25519-key "//Charlie" storage-provider register "placeholder"

After registering, there is one more thing to do before proofs can be verified on the local testnet: we need to set the global verifying key on the network so that it's compatible with the proving parameters:

storagext-cli --sr25519-key "//Charlie" proofs set-porep-verifying-key @2KiB.porep.vk.scale

Additionally, you will need to add some balance to your Polka Storage escrow account, like so:

$ storagext-cli --sr25519-key "//Charlie" market add-balance 12500000000
[0x809d…8f10] Balance Added: { account: 5FLSigC9HGRKVhB9FiEo4Y3koPsNmBmLJbpXg2mp1hXcS59Y, amount: 12500000000 }

You can use other balance values! There's a minimum though — 1_000_000_000 (without the _).

And you're ready!

Launching the server 🚀

As in the previous steps, here too you'll need to run a command. The following is the minimal one:

polka-storage-provider-server \
  --seal-proof 2KiB \
  --post-proof 2KiB \
  --porep-parameters <POREP-PARAMS> \
  --X-key <KEY>

Where --X-key <KEY> matches the key type you used to register yourself with the network, in the previous step. For example:

polka-storage-provider-server \
  --seal-proof 2KiB \
  --post-proof 2KiB \
  --porep-parameters "2KiB.porep.params" \
  --sr25519-key "//Charlie"

Note that currently, --seal-proof and --post-proof only support 2KiB.

<POREP-PARAMS> is the resulting *.porep.params file from the first steps, in this case, 2KiB.porep.params.

When run like this, the server will use a random directory for the database and the storage. You can change these with the --database-directory and --storage-directory flags, respectively; if the given directory does not exist, it will be created.

You can also change the parachain node address the server connects to; by default, it will try to connect to ws://127.0.0.1:42069, but you can change this using the --node-url flag.

Finally, you can change the listening addresses for the RPC and HTTP services; they default to 127.0.0.1:8000 and 127.0.0.1:8001 respectively and can be changed using the --rpc-listen-address and --upload-listen-address flags.
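
Putting it all together, a fully explicit invocation could look like the following sketch (the directory paths are arbitrary examples and the addresses simply repeat the defaults):

polka-storage-provider-server \
  --seal-proof 2KiB \
  --post-proof 2KiB \
  --porep-parameters "2KiB.porep.params" \
  --sr25519-key "//Charlie" \
  --database-directory ./deals_database \
  --storage-directory ./deals_storage \
  --node-url "ws://127.0.0.1:42069" \
  --rpc-listen-address "127.0.0.1:8000" \
  --upload-listen-address "127.0.0.1:8001"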

For more information on the available flags, refer to the server chapter.

Proving a file

To store the file according to the protocol, the Storage Provider has to assign it to a sector, pre-commit it, and then prove it! That's a lot of steps, but it is all handled automatically, behind the scenes, by the pipeline, and the file is eventually published.

Here are excerpts from the Storage Provider node's logs after executing the store-a-file scenario:

2024-11-11T12:34:21.430693Z  INFO start_rpc_server: polka_storage_provider_server::rpc: Starting RPC server at 127.0.0.1:8000
2024-11-11T12:34:21.430870Z  INFO start_upload_server: polka_storage_provider_server::storage: Starting HTTP storage server at: 127.0.0.1:8001
2024-11-11T12:34:21.431984Z  INFO start_rpc_server: polka_storage_provider_server::rpc: RPC server started
2024-11-11T12:35:07.883255Z  INFO request{method=PUT matched_path="/upload/:cid" request_id=e71d7e49-0272-435e-899e-a12a5d639268}:upload: polka_storage_provider_server::storage: CAR file created final_content_path="/var/folders/51/ch08ltd95bxcwpvskd28wr5h0000gp/T/Xvm5m7j/deals_storage/car/bafkreihoxd7eg2domoh2fxqae35t7ihbonyzcdzh5baevxzrzkaakevuvy.car"
2024-11-11T12:37:29.258216Z  INFO add_piece: polka_storage_provider_server::pipeline: Adding a piece...
2024-11-11T12:37:29.258785Z  INFO polka_storage_provider_server::pipeline: Preparing piece...
2024-11-11T12:37:29.259375Z  INFO polka_storage_provider_server::pipeline: Adding piece...
2024-11-11T12:37:29.261621Z  INFO add_piece: polka_storage_provider_server::pipeline: Finished adding a piece
2024-11-11T12:37:29.261979Z  INFO polka_storage_provider_server::pipeline: Add Piece for piece Commitment { commitment: [...], kind: Piece }, deal id 0, finished successfully.
2024-11-11T12:37:29.262023Z  INFO precommit: polka_storage_provider_server::pipeline: Starting pre-commit
2024-11-11T12:37:29.262258Z  INFO precommit: polka_storage_provider_server::pipeline: Padded sector, commencing pre-commit and getting last finalized block
2024-11-11T12:37:29.263185Z  INFO precommit: polka_storage_provider_server::pipeline: Current block: 35
2024-11-11T12:37:29.263852Z  INFO filecoin_proofs::api::seal: seal_pre_commit_phase1:start: SectorId(1)
2024-11-11T12:37:29.275251Z  INFO storage_proofs_porep::stacked::vanilla::proof: replicate_phase1
2024-11-11T12:37:29.275814Z  INFO storage_proofs_porep::stacked::vanilla::graph: using parent_cache[64 / 64]
2024-11-11T12:37:29.276105Z  INFO storage_proofs_porep::stacked::vanilla::cache: parent cache: opening /var/tmp/filecoin-parents/v28-sdr-parent-3f0eef38bb48af1f48ad65e14eb85b4ebfc167cec18cd81764f6d998836c9899.cache, verify enabled: false
2024-11-11T12:37:29.277633Z  INFO storage_proofs_porep::stacked::vanilla::proof: single core replication
2024-11-11T12:37:29.277644Z  INFO storage_proofs_porep::stacked::vanilla::create_label::single: generate labels
2024-11-11T12:37:29.277681Z  INFO storage_proofs_porep::stacked::vanilla::create_label::single: generating layer: 1
2024-11-11T12:37:29.277915Z  INFO storage_proofs_porep::stacked::vanilla::create_label::single:   storing labels on disk
2024-11-11T12:37:29.278316Z  INFO storage_proofs_porep::stacked::vanilla::create_label::single:   generated layer 1 store with id layer-1
2024-11-11T12:37:29.278328Z  INFO storage_proofs_porep::stacked::vanilla::create_label::single:   setting exp parents
2024-11-11T12:37:29.278336Z  INFO storage_proofs_porep::stacked::vanilla::create_label::single: generating layer: 2
2024-11-11T12:37:29.278418Z  INFO storage_proofs_porep::stacked::vanilla::create_label::single:   storing labels on disk
2024-11-11T12:37:29.278735Z  INFO storage_proofs_porep::stacked::vanilla::create_label::single:   generated layer 2 store with id layer-2
2024-11-11T12:37:29.278745Z  INFO storage_proofs_porep::stacked::vanilla::create_label::single:   setting exp parents
2024-11-11T12:37:29.278761Z  INFO filecoin_proofs::api::seal: seal_pre_commit_phase1:finish: SectorId(1)
[...]
2024-11-11T12:37:29.313831Z  INFO storage_proofs_core::data: dropping data /var/folders/51/ch08ltd95bxcwpvskd28wr5h0000gp/T/Xvm5m7j/deals_storage/sealed/1
2024-11-11T12:37:29.314137Z  INFO filecoin_proofs::api::seal: seal_pre_commit_phase2:finish
2024-11-11T12:37:29.314165Z  INFO precommit: polka_storage_provider_server::pipeline: Created sector's replica: PreCommitOutput { }
[...]
2024-11-11T12:37:57.324204Z  INFO precommit: polka_storage_provider_server::pipeline: Successfully pre-commited sectors on-chain: [SectorsPreCommitted { block: 39, [...] }]
2024-11-11T12:37:57.324292Z  INFO polka_storage_provider_server::pipeline: Precommit for sector 1 finished successfully.
2024-11-11T12:37:57.324345Z  INFO prove_commit: polka_storage_provider_server::pipeline: Starting prove commit
2024-11-11T12:37:57.325705Z  INFO prove_commit: polka_storage_provider_server::pipeline: Wait for block 49 to get randomness
2024-11-11T12:39:05.518784Z  INFO storage_proofs_porep::stacked::vanilla::proof: generating interactive vanilla proofs
2024-11-11T12:39:05.529259Z  INFO bellperson::groth16::prover::native: Bellperson 0.26.0 is being used!
2024-11-11T12:39:06.634632Z  INFO bellperson::groth16::prover::native: synthesis time: 1.105318708s
2024-11-11T12:39:06.634659Z  INFO bellperson::groth16::prover::native: starting proof timer
[...]
2024-11-11T12:39:23.728566Z  INFO bellperson::groth16::prover::native: prover time: 17.094277959s
2024-11-11T12:39:23.737186Z  INFO prove_commit: polka_storage_provider_server::pipeline: Proven sector: 1

After that, the Storage Provider needs to continuously submit a PoSt to prove that they are still storing the file. If they do not, they'll be slashed. We have not yet integrated the PoSt verification logic with the Storage Provider node, but the on-chain logic has been implemented.

Storing a file

Before reading this guide, please follow the local testnet guide and storage provider guide. You should have a working testnet and a Storage Provider running!

Storage Client

The Polkadot logo

Alice is a Storage User and wants to store an image of her lovely Polkadot logo polkadot.svg in the Polka Storage parachain.

Alice knows that she needs to prepare an image for storage and get its CID. To do so, she first converts it into a CARv2 archive and gets the piece CID.

$ mater-cli convert -q --overwrite polkadot.svg polkadot.car
bafkreihoxd7eg2domoh2fxqae35t7ihbonyzcdzh5baevxzrzkaakevuvy
$ polka-storage-provider-client proofs commp polkadot.car
{
  "cid": "baga6ea4seaqabpfwrqjcwrb4pxmo2d3dyrgj24kt4vqqqcbjoph4flpj2e5lyoq",
  "size": 2048
}

Proposing a deal

Afterwards, it's time to propose a deal; currently — i.e. while the network isn't live — any deal will be accepted by Charlie (the Storage Provider).

Alice fills out the deal form according to a JSON template (polka-logo-deal.json):

{
  "piece_cid": "baga6ea4seaqabpfwrqjcwrb4pxmo2d3dyrgj24kt4vqqqcbjoph4flpj2e5lyoq",
  "piece_size": 2048,
  "client": "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY",
  "provider": "5FLSigC9HGRKVhB9FiEo4Y3koPsNmBmLJbpXg2mp1hXcS59Y",
  "label": "",
  "start_block": 200,
  "end_block": 250,
  "storage_price_per_block": 500,
  "provider_collateral": 1000100200,
  "state": "Published"
}
  • piece_cid — is the cid field from the previous step, where she calculated the piece commitment. It uniquely identifies the piece.
  • piece_size — is the size field from the previous step, where she calculated the piece commitment. It is the size of the processed piece, not the original file!
  • client — is the client's (i.e. the reader's) public key, encoded in bs58 format. For more information on how to generate your own keypair, read the Polka Storage Provider CLI/client/wallet.
  • provider — is the storage provider's public key, encoded in bs58 format. If you don't know your storage provider's public key, you can query it using polka-storage-provider-client's info command.
  • label — is an arbitrary string to be associated with the deal.
  • start_block — is the deal's start block, it MUST be positive and lower than end_block.
  • end_block — is the deal's end block, it must be positive and larger than start_block.
  • storage_price_per_block — the storage price over the duration of a single block — e.g. if your deal is 20 blocks long, it will cost 20 * storage_price_per_block in total.
  • provider_collateral — the price to pay by the storage provider if they fail to uphold the deal.
  • state — the deal state, only Published is accepted.

The start_block and end_block fields may need to be changed depending on the current block you are on. The values 200 and 250 are solely for demonstration purposes and we encourage you to try other values!
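
As a quick sanity check on the cost: with the values above, the deal runs for 250 - 200 = 50 blocks at 500 Plancks per block, so the storage itself costs 50 * 500 = 25000 Plancks in total.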

Variables subject to change depending on the chain's state

start_block - The start block must be after the current block. Check the polka storage node logs or use the polkadot.js UI for the current block and adjust the start_block value accordingly.

end_block - The end block must be between 50 and 1800 blocks after start_block.

See the Storage Provider Constants and the Market Constant for more information about the configuration variables.

When the deal is ready, she proposes it:

$ polka-storage-provider-client propose-deal --rpc-server-url "http://localhost:8000" "@polka-logo-deal.json"
bagaaierab543mpropvi5mnmtptytnnlbr2j7vea7lowcugrqt7epanybw7ta

The storage provider replied with a CID — the CID of the deal Alice just sent — she needs to keep this CID for the next steps!

Once the server has replied with the CID, she's ready to upload the file. This can be done with any tool that can upload a file over HTTP. The server supports both multipart forms and PUT.

$ curl --upload-file "polkadot.svg" "http://localhost:8001/upload/bagaaierab543mpropvi5mnmtptytnnlbr2j7vea7lowcugrqt7epanybw7ta"
baga6ea4seaqabpfwrqjcwrb4pxmo2d3dyrgj24kt4vqqqcbjoph4flpj2e5lyoq

Publishing the deal

Before Alice publishes a deal, she must ensure that she has the necessary funds available in the market escrow to be able to pay for the deal:

$ storagext-cli --sr25519-key "//Alice" market add-balance 25000000000
[0x6489…a2c0] Balance Added: { account: 5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY, amount: 25000000000 }

Finally, she can publish the deal by submitting her deal proposal, along with her signature, to the storage provider.

To sign her deal proposal she runs:

$ polka-storage-provider-client sign-deal --sr25519-key "//Alice" @polka-logo-deal.json
{
  "deal_proposal": {
    "piece_cid": "baga6ea4seaqabpfwrqjcwrb4pxmo2d3dyrgj24kt4vqqqcbjoph4flpj2e5lyoq",
    "piece_size": 2048,
    "client": "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY",
    "provider": "5FLSigC9HGRKVhB9FiEo4Y3koPsNmBmLJbpXg2mp1hXcS59Y",
    "label": "",
    "start_block": 200,
    "end_block": 250,
    "storage_price_per_block": 500,
    "provider_collateral": 1000100200,
    "state": "Published"
  },
  "client_signature": {
    "Sr25519": "7eb8597441711984b7352bd4a118eac57341296724c20d98a76ff8d01ee64038f6a9881e492a98c3a190e7b600a8313d72e9f0edacb3e6df6b0b4507dabb9580"
  }
}

Hint: you can run the following command to write the signed deal straight to a file:

polka-storage-provider-client sign-deal --sr25519-key "//Alice" @polka-logo-deal.json > signed-logo-deal.json

All that's left is to publish the deal:

$ polka-storage-provider-client publish-deal --rpc-server-url "http://localhost:8000" @signed-logo-deal.json
Successfully published deal of id: 0

On Alice's side, that's it!

Polka Storage Provider

The Polka Storage Provider is composed of two different binaries: one for the server and one for the client.

The server binary is fairly straightforward, providing a single binary that runs all the components necessary for a storage provider, such as the RPC and HTTP APIs, as well as the proving pipeline. The client binary provides tools for server administration, client deal operations and some demos.

Do not worry, this chapter will go over all that!

Polka Storage Provider — Server

This chapter covers the available CLI options for the Polka Storage Provider server.

--upload-listen-address

The storage server's endpoint address — i.e. where the client will upload their files to.

It takes in an IP address along with a port in the format: <ip>:<port>. Defaults to 127.0.0.1:8001.

--rpc-listen-address

The RPC server endpoint's address — i.e. where you will submit your deals to.

It takes in an IP address along with a port in the format: <ip>:<port>. Defaults to 127.0.0.1:8000.

--node-url

The target parachain node's address — i.e. the parachain node the storage provider will submit deals to, etc.

It takes in a URL; both HTTP and WebSockets, as well as their secure variants, are supported. Defaults to ws://127.0.0.1:42069.

--sr25519-key

Sr25519 keypair, encoded as hex, BIP-39 or a dev phrase like //Alice.

See sp_core::crypto::Pair::from_string_with_seed for more information.

If this --sr25519-key is not used, either --ecdsa-key or --ed25519-key MUST be used.

--ecdsa-key

ECDSA keypair, encoded as hex, BIP-39 or a dev phrase like //Alice.

See sp_core::crypto::Pair::from_string_with_seed for more information.

If this --ecdsa-key is not used, either --sr25519-key or --ed25519-key MUST be used.

--ed25519-key

Ed25519 keypair, encoded as hex, BIP-39 or a dev phrase like //Alice.

See sp_core::crypto::Pair::from_string_with_seed for more information.

If this --ed25519-key is not used, either --ecdsa-key or --sr25519-key MUST be used.

--database-directory

The RocksDB storage directory, where deal information will be kept.

It takes in a valid folder path; if the directory does not exist, it will be created along with all intermediate paths. Defaults to a pseudo-random temporary directory — /tmp/<random string>/deals_database.

--storage-directory

The piece storage directory, where pieces will be kept.

It takes in a valid folder path; if the directory does not exist, it will be created along with all intermediate paths. Defaults to a pseudo-random temporary directory — /tmp/<random string>/....

Storage directories for the pieces, unsealed and sealed sectors will be created under it.

--seal-proof

The kind of replication proof. Currently, only StackedDRG2KiBV1P1 is supported, and it is the default.

--post-proof

The kind of storage proof. Currently, only StackedDRGWindow2KiBV1P1 is supported, and it is the default.

client

We cover the commands provided by the polka-storage-provider-client CLI tool.

wallet

The wallet command is a thin wrapper over the subkey utility provided by Polkadot.

More information available on the wallet page.

info

The info command retrieves information about the storage provider it connects to.

$ polka-storage-provider-client info --rpc-server-url "http://127.0.0.1:8000"
{
  "start_time": "2024-11-06T11:29:06.058967136Z",
  "address": "5FLSigC9HGRKVhB9FiEo4Y3koPsNmBmLJbpXg2mp1hXcS59Y",
  "seal_proof": "StackedDRG2KiBV1P1",
  "post_proof": "StackedDRGWindow2KiBV1P1"
}

propose-deal

The propose-deal command sends an unsigned deal to the storage provider. If the storage provider accepts the deal, a CID is returned; that CID can then be used to upload a file to the storage provider — for details on this process, refer to the File Upload chapter.

For the current MVP, the storage provider accepts all valid deals!

$ DEAL_TO_PROPOSE='{
    "piece_cid": "baga6ea4seaqj527iqfb2kqhy3tmpydzroiigyaie6g3txai2kc3ooyl7kgpeipi",
    "piece_size": 2048,
    "client": "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY",
    "provider": "5FLSigC9HGRKVhB9FiEo4Y3koPsNmBmLJbpXg2mp1hXcS59Y",
    "label": "",
    "start_block": 200,
    "end_block": 250,
    "storage_price_per_block": 500,
    "provider_collateral": 1250,
    "state": "Published"
}'
# when we omit the `--rpc-server-url` it defaults to "http://127.0.0.1:8000"
$ polka-storage-provider-client propose-deal "$DEAL_TO_PROPOSE"
bagaaieradsfmawozrmgjwxosarexpg7w7ytoe7xw2c63hv6svdc5hpucqo3a

sign-deal

The sign-deal command takes a deal like the one passed to propose-deal and signs it using the passed key; the returned deal can then be used with publish-deal to send the deal for publishing. This command does not call out to the network.

$ DEAL_TO_SIGN='{
    "piece_cid": "baga6ea4seaqj527iqfb2kqhy3tmpydzroiigyaie6g3txai2kc3ooyl7kgpeipi",
    "piece_size": 2048,
    "client": "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY",
    "provider": "5FLSigC9HGRKVhB9FiEo4Y3koPsNmBmLJbpXg2mp1hXcS59Y",
    "label": "",
    "start_block": 200,
    "end_block": 250,
    "storage_price_per_block": 500,
    "provider_collateral": 1250,
    "state": "Published"
}'
$ polka-storage-provider-client sign-deal --sr25519-key "//Charlie" "$DEAL_TO_SIGN"
{
  "deal_proposal": {
    "piece_cid": "baga6ea4seaqj527iqfb2kqhy3tmpydzroiigyaie6g3txai2kc3ooyl7kgpeipi",
    "piece_size": 2048,
    "client": "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY",
    "provider": "5FLSigC9HGRKVhB9FiEo4Y3koPsNmBmLJbpXg2mp1hXcS59Y",
    "label": "",
    "start_block": 200,
    "end_block": 250,
    "storage_price_per_block": 500,
    "provider_collateral": 1250,
    "state": "Published"
  },
  "client_signature": {
    "Sr25519": "32809cd5b53fa3c2e977f77c4e2189dee230b8773946cf94a704f8af19c578289c11ad256b56146195cfc5d7bb8f670003e4575e133f799f19696495046ed58f"
  }
}

publish-deal

The publish-deal command effectively publishes the deal; its input is a deal signed using sign-deal, and the output is the on-chain deal ID.

$ SIGNED_DEAL='{
  "deal_proposal": {
    "piece_cid": "baga6ea4seaqj527iqfb2kqhy3tmpydzroiigyaie6g3txai2kc3ooyl7kgpeipi",
    "piece_size": 2048,
    "client": "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY",
    "provider": "5FLSigC9HGRKVhB9FiEo4Y3koPsNmBmLJbpXg2mp1hXcS59Y",
    "label": "",
    "start_block": 200,
    "end_block": 250,
    "storage_price_per_block": 500,
    "provider_collateral": 1250,
    "state": "Published"
  },
  "client_signature": {
    "Sr25519": "32809cd5b53fa3c2e977f77c4e2189dee230b8773946cf94a704f8af19c578289c11ad256b56146195cfc5d7bb8f670003e4575e133f799f19696495046ed58f"
  }
}'
$ polka-storage-provider-client publish-deal "$SIGNED_DEAL"
0

The wallet command

The wallet command is a re-export of the Substrate CLI. The detailed documentation is available in Substrate's subkey documentation.

Subcommands

The following commands are available with the wallet subcommand:

  • generate-node-key: Generate a random node key, write it to a file or stdout, and write the corresponding peer-id to stderr
  • generate: Generate a random account
  • inspect: Gets a public key and an SS58 address from the provided Secret URI
  • inspect-node-key: Load a node key from a file or stdin and print the corresponding peer-id
  • sign: Sign a message with a given (secret) key
  • vanity: Generate a seed that provides a vanity address
  • verify: Verify a signature for a message, provided on STDIN, with a given (public or secret) key
  • help: Print this message or the help of the given subcommand(s)

Examples

Keys shown on this page are, by default, not secure! Do not use them in production!

Generate a new random key to interact with the Polka Storage parachain:

> polka-storage-provider-client wallet generate
Secret phrase:       offer payment boost boy manage car asset lock cousin mountain vehicle setup
  Network ID:        substrate
  Secret seed:       0xfe36ee692552b0ce54de06ce4f5cc152fe2fa808cb40f58c81168bc1237208bb
  Public key (hex):  0x3ae6bdc05a6657cea011084d32b9970891be5d02b2101bbad0ca95d287f0226e
  Account ID:        0x3ae6bdc05a6657cea011084d32b9970891be5d02b2101bbad0ca95d287f0226e
  Public key (SS58): 5DPwBLBRGunws9T2aF59cht37HeBg9aSTAc6Fh2aFBJPSsr6
  SS58 Address:      5DPwBLBRGunws9T2aF59cht37HeBg9aSTAc6Fh2aFBJPSsr6

The password may be added interactively, using the --password-interactive flag:

> polka-storage-provider-client wallet generate --password-interactive
Key password: <top secret hidden password>
Secret phrase:       comfort distance rack number assist nasty young universe lamp advice neglect ladder
  Network ID:        substrate
  Secret seed:       0x4243f3f1d78beb5c0408bbaeae58845881b638060380437967482be2d4d42bce
  Public key (hex):  0x3acb66c0313d0e8ef896bc2317545582c1f0a928f402bcbe4cdf6f37489ddb16
  Account ID:        0x3acb66c0313d0e8ef896bc2317545582c1f0a928f402bcbe4cdf6f37489ddb16
  Public key (SS58): 5DPo4H1oPAQwReNVMi9XckSkvW4me1kJoageggJSMDF2EzjZ
  SS58 Address:      5DPo4H1oPAQwReNVMi9XckSkvW4me1kJoageggJSMDF2EzjZ

Or it can be passed directly, beforehand:

> polka-storage-provider-client wallet generate --password <top secret password>
Secret phrase:       cactus art crime burden hope also thought asset lake only cheese obtain
  Network ID:        substrate
  Secret seed:       0xb69c2d238fa7641f0d69911ca8f107f1b97a51cfc71e8a06e0ec9c7329d69ff7
  Public key (hex):  0xb60a716e488bcb2a54ef1b1cf8874569d2d927cc830ae0ae1cc2612fac27f55d
  Account ID:        0xb60a716e488bcb2a54ef1b1cf8874569d2d927cc830ae0ae1cc2612fac27f55d
  Public key (SS58): 5GBPg51VZG8PobmkLNSn9vDkNvoBXV5vCGhbetifgxwjPKAg
  SS58 Address:      5GBPg51VZG8PobmkLNSn9vDkNvoBXV5vCGhbetifgxwjPKAg

proofs

The following subcommands are contained under proofs.

These are advanced commands and only useful for demo purposes. This functionality is covered in the server by the pipeline.

  • commp: Calculate a piece commitment (CommP) for the provided data stored at a given path.
  • porep-params: Generates the PoRep verifying key and proving parameters for zk-SNARK workflows (prove commit).
  • post-params: Generates the PoSt verifying key and proving parameters for zk-SNARK workflows (submit windowed PoSt).
  • porep: Generates a PoRep for a piece file. Takes a piece file (in a CARv2 archive, unpadded), puts it into a sector (temp file), seals and proves it.
  • post: Creates a PoSt for a single sector.

commp

Produces a CommP out of a CARv2 archive and calculates the piece_size that will be accepted by the network in a deal. If the file at the path is not a CARv2 archive, it fails. To create a CARv2 archive, you can use the mater-cli convert command.

Example

$ mater-cli convert polkadot.svg
Converted polkadot.svg and saved the CARv2 file at polkadot.car with a CID of bafkreihoxd7eg2domoh2fxqae35t7ihbonyzcdzh5baevxzrzkaakevuvy
$ polka-storage-provider-client proofs commp polkadot.car
{
    "cid": "baga6ea4seaqabpfwrqjcwrb4pxmo2d3dyrgj24kt4vqqqcbjoph4flpj2e5lyoq",
    "size": 2048
}

porep-params

Generates the PoRep parameters, which consist of the Proving Parameters (*.porep.params file) and the Verifying Key (*.porep.vk, *.porep.vk.scale). The Proving Parameters are used by the Storage Provider to generate a PoRep, and the corresponding Verifying Key is used to verify proofs on-chain by pallet-proofs and pallet-storage-provider.

Example

$ polka-storage-provider-client proofs porep-params
Generating params for 2KiB sectors... It can take a couple of minutes ⌛
Generated parameters:
[...]/polka-storage/2KiB.porep.params
[...]/polka-storage/2KiB.porep.vk
[...]/polka-storage/2KiB.porep.vk.scale

post-params

Generates the PoSt parameters, which consist of the Proving Parameters (*.post.params file) and the Verifying Key (*.post.vk, *.post.vk.scale). The Proving Parameters are used by the Storage Provider to generate a PoSt, and the corresponding Verifying Key is used to verify proofs on-chain by pallet-proofs and pallet-storage-provider.

Example

$ polka-storage-provider-client proofs post-params
Generating PoSt params for 2KiB sectors... It can take a few secs ⌛
Generated parameters:
[...]/polka-storage/2KiB.post.params
[...]/polka-storage/2KiB.post.vk
[...]/polka-storage/2KiB.post.vk.scale

porep

Generates a 2KiB sector-sized PoRep proof for an input file and its piece commitment. It creates a sector containing only one piece, seals it by creating a replica, and then creates a proof for it.

This is a demo command, showcasing the ability to generate a PoRep given the proving parameters, so it can later be used to verify a proof on-chain. It uses hardcoded values, which would normally be sourced from the chain, e.g.:

#![allow(unused)]
fn main() {
let sector_id = 77;
let ticket = [12u8; 32];
let seed = [13u8; 32];
}
polka-storage-provider-client proofs porep \
    --sr25519-key|--ecdsa-key|--ed25519-key <KEY> \
    --cache-directory <CACHE_DIRECTORY> \
    --proof-parameters-path <PROVING_PARAMS_FILE> \
    <INPUT_FILE> <INPUT_FILE_PIECE_CID>

Example

$ mater-cli convert polkadot.svg
Converted polkadot.svg and saved the CARv2 file at polkadot.car with a CID of bafkreihoxd7eg2domoh2fxqae35t7ihbonyzcdzh5baevxzrzkaakevuvy
$ polka-storage-provider-client proofs commp polkadot.car
{
    "cid": "baga6ea4seaqabpfwrqjcwrb4pxmo2d3dyrgj24kt4vqqqcbjoph4flpj2e5lyoq",
    "size": 2048
}
$ polka-storage-provider-client proofs porep-params
Generating params for 2KiB sectors... It can take a couple of minutes ⌛
Generated parameters:
[...]/polka-storage/2KiB.porep.params
[...]/polka-storage/2KiB.porep.vk
[...]/polka-storage/2KiB.porep.vk.scale
$ mkdir -p /tmp/psp-cache
$ polka-storage-provider-client proofs porep --sr25519-key "//Alice" --cache-directory /tmp/psp-cache --proof-parameters-path 2KiB.porep.params polkadot.car baga6ea4seaqabpfwrqjcwrb4pxmo2d3dyrgj24kt4vqqqcbjoph4flpj2e5lyoq
Creating sector...
Precommitting...
2024-11-18T10:48:29.858550Z  INFO filecoin_proofs::api::seal: seal_pre_commit_phase1:start: SectorId(77)
2024-11-18T10:48:29.863782Z  INFO storage_proofs_porep::stacked::vanilla::proof: replicate_phase1
2024-11-18T10:48:29.864120Z  INFO storage_proofs_porep::stacked::vanilla::graph: using parent_cache[64 / 64]
[...]
CommD: Cid(baga6ea4seaqabpfwrqjcwrb4pxmo2d3dyrgj24kt4vqqqcbjoph4flpj2e5lyoq)
CommR: Cid(bagboea4b5abcb7rgo7kuqigb2wjybggbvlmmatmki52by3wov5uwjrjwefxwzxi5)
Wrote proof to [...]/polka-storage/77.sector.proof.porep.scale

post

Generates a 2KiB sector-sized PoSt proof. To be able to create a PoSt proof, you first need to generate a PoRep proof and a replica via the porep command.

This is a demo command, showcasing the ability to generate a PoSt given the proving parameters, so it can later be used to verify a proof on-chain. It uses hardcoded values, which would normally be sourced from the chain, e.g.:

#![allow(unused)]
fn main() {
let sector_id = 77;
let randomness = [1u8; 32];
}
polka-storage-provider-client proofs post \
  --sr25519-key|--ecdsa-key|--ed25519-key <KEY> \
  --proof-parameters-path <PROOF_PARAMETERS_PATH> \
  --cache-directory <CACHE_DIRECTORY> \
  <REPLICA_PATH> \
  <COMM_R>

Example

$ polka-storage-provider-client proofs post-params
Generating PoSt params for 2KiB sectors... It can take a few secs ⌛
Generated parameters:
[...]/polka-storage/2KiB.post.params
[...]/polka-storage/2KiB.post.vk
[...]/polka-storage/2KiB.post.vk.scale
$ polka-storage-provider-client proofs post --sr25519-key "//Alice" --cache-directory /tmp/psp-cache --proof-parameters-path 2KiB.post.params 77.sector.sealed bagboea4b5abcb7rgo7kuqigb2wjybggbvlmmatmki52by3wov5uwjrjwefxwzxi5
Loading parameters...
2024-11-18T11:20:26.718119Z  INFO storage_proofs_core::compound_proof: vanilla_proofs:start
2024-11-18T11:20:26.750347Z  INFO storage_proofs_core::compound_proof: vanilla_proofs:finish
2024-11-18T11:20:26.750712Z  INFO storage_proofs_core::compound_proof: snark_proof:start
2024-11-18T11:20:26.750797Z  INFO bellperson::groth16::prover::native: Bellperson 0.26.0 is being used!
2024-11-18T11:20:26.771368Z  INFO bellperson::groth16::prover::native: synthesis time: 20.550334ms
2024-11-18T11:20:26.771385Z  INFO bellperson::groth16::prover::native: starting proof timer
2024-11-18T11:20:26.772676Z  INFO bellperson::gpu::locks: GPU is available for FFT!
2024-11-18T11:20:26.772687Z  INFO bellperson::gpu::locks: BELLPERSON_GPUS_PER_LOCK fallback to single lock mode
Proving...
Wrote proof to [...]/polka-storage/77.sector.proof.post.scale

Storagext CLI

Alongside the pallets, we've also developed a CLI to enable calling extrinsics without Polkadot.js.

The CLI's goal is to ease development and testing and to sidestep some limitations of the Polkadot.js visual interface.

This chapter covers how to use the storagext-cli, along with that, there are several usage examples available throughout the book.

Getting started

The storagext-cli takes two main flags — the node's RPC address and a key (1); the latter comes in three kinds, one of which is required for most operations (for example, if the operation being called is a signed extrinsic (2)):

  • Sr25519: --sr25519-key or the SR25519_KEY environment variable
  • ECDSA: --ecdsa-key or the ECDSA_KEY environment variable
  • Ed25519: --ed25519-key or the ED25519_KEY environment variable

For example, to connect to a node at supercooldomain.com:1337 using Charlie's Sr25519 key:

storagext-cli --node-rpc "supercooldomain.com:1337" --sr25519-key "//Charlie" <commands>

Or, passing the same key using the environment variable form:

SR25519_KEY="//Charlie" storagext-cli --node-rpc "supercooldomain.com:1337" <commands>

Flags

  • --node-rpc: The node's RPC address (including port), defaults to ws://127.0.0.1:42069
  • --sr25519-key: Sr25519 keypair, encoded as hex, BIP-39 or a dev phrase like //Charlie
  • --ecdsa-key: ECDSA keypair, encoded as hex, BIP-39 or a dev phrase like //Charlie
  • --ed25519-key: Ed25519 keypair, encoded as hex, BIP-39 or a dev phrase like //Charlie
  • --format: The output format, either json or plain (case insensitive), defaults to plain
  • --n-retries: The number of connection retries when trying to initially connect to the parachain, defaults to 10
  • --retry-interval: The retry interval between connection retries, in milliseconds, defaults to 3000 (3 seconds)
  • --wait-for-finalization: Wait for the inclusion of the extrinsic call in a finalized block; waits by default

--format

The --format global flag changes how extrinsic output is presented. If the output is set to plain, we do not make any guarantees about the output format; as such, you should not rely on it for scripts!

If the output is set to json, all standard output from the CLI will be formatted as JSON; however, as it currently stands, we do not guarantee a stable interface — though we will make an effort to keep changes to a minimum and document them.
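
For example, to get JSON output from an unsigned query (using Alice's well-known address purely for illustration):

storagext-cli --format json market retrieve-balance "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY"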

(1) Read more about how cryptographic keys are used in Polkadot — https://wiki.polkadot.network/docs/learn-cryptography.

(2) If a key is passed to the CLI, but the operation called does not require a key, the key will not be used.

--n-retries and --retry-interval

These flags help you connect in a difficult network environment, or when you're launching the node and it's still booting up; they allow you to "actively wait" for the node to come online.
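
For example, to retry every second for up to 20 attempts while the node boots (the values here are arbitrary):

storagext-cli --n-retries 20 --retry-interval 1000 --sr25519-key "//Alice" market add-balance 1000000000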

--wait-for-finalization

If you want to see the result of your extrinsic call, this flag is for you. By default, storagext-cli will wait for the result of the extrinsic; to disable this behaviour, use --wait-for-finalization=false.

When enabled, storagext-cli will wait until the extrinsic makes it into a finalized block and will report its result — whether the call was successful or not.
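
For example, to submit an extrinsic without waiting for its inclusion in a finalized block:

storagext-cli --wait-for-finalization=false --sr25519-key "//Alice" market add-balance 1000000000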

Sub-chapters

The market command

Under the market subcommand, Market-related extrinsics are available. This chapter covers the provided commands and how to use them.

The storagext-cli getting started page covers the basic flags necessary to operate the CLI and should be read first.

add-balance

The add-balance command adds balance to the market account of the extrinsic signer. It takes a single AMOUNT argument, the balance to add to the market account; the amount will be added to the free balance.

Parameters

  • AMOUNT: The amount to be added to the market balance (positive integer)

Example

Adding 1000000000 Plancks to Alice's account.

storagext-cli --sr25519-key "//Alice" market add-balance 1000000000

The 1000000000 value is not arbitrary; it is the minimum existential deposit for any Polkadot account. As such, when the Market account is being set up, the first deposit needs to meet this minimum to create the Market account.

An attempt to create a Market account with less than 1000000000 will produce the following error:

Error: Runtime error: Token error: Account cannot exist with the funds that would be given.

More information about the add_balance extrinsic is available in Pallets/Market Pallet/Add Balance.

withdraw-balance

The withdraw-balance command withdraws balance from the market account of the extrinsic signer. Like add-balance, withdraw-balance takes a single AMOUNT argument; note that only free balance can be withdrawn. Likewise, the withdrawn amount must be less than or equal to the free amount and greater than 0 (\({free} \ge {amount} \gt 0\)).

Parameters

  • AMOUNT: The amount to be withdrawn from the market balance (positive integer)

Example

Withdrawing 10000 Plancks from Alice's account.

storagext-cli --sr25519-key "//Alice" market withdraw-balance 10000

More about the withdraw_balance extrinsic is available in Pallets/Market Pallet/Withdraw Balance.

publish-storage-deals

The publish-storage-deals command publishes storage deals that have been agreed upon off-chain. The deals are to be submitted by the storage provider, having been previously signed by the client.

Since this CLI is currently targeted at testing and demos, the client keypair is required to sign the deal. We know this is not secure and unrealistic in a production scenario (it is a good thing this is a demo)!

Parameters

The client keypair can be passed using --client-<key kind>, where <key kind> is one of the three supported key kinds; like the global keys, one is required.

  • --client-sr25519-key: Sr25519 keypair (encoded as hex, BIP-39 or a dev phrase like //Charlie)
  • --client-ecdsa-key: ECDSA keypair (encoded as hex, BIP-39 or a dev phrase like //Charlie)
  • --client-ed25519-key: Ed25519 keypair (encoded as hex, BIP-39 or a dev phrase like //Charlie)
  • DEALS: The deals to be published (a JSON array; can be passed as a string, or as a file path prefixed with @ pointing to the file containing the JSON)

The DEALS JSON array is composed of objects:

  • piece_cid: Byte encoded CID
  • piece_size: Size of the piece (positive integer)
  • client: SS58 address of the storage client
  • provider: SS58 address of the storage provider
  • label: Arbitrary client-chosen label (string, with a maximum length of 128 characters)
  • start_block: Block number on which the deal should start (positive integer)
  • end_block: Block number on which the deal should end (positive integer, end_block > start_block)
  • storage_price_per_block: Price for the storage, specified per block (positive integer, in Plancks)
  • provider_collateral: Collateral which is slashed if the deal fails (positive integer, in Plancks)
  • state: Deal state; can only be set to Published

Example

Publishing deals between Alice (the Storage Provider) and Charlie (the client).

storagext-cli --sr25519-key "//Alice" market publish-storage-deals \
  --client-sr25519-key "//Charlie" \
  "@deals.json"

Where deals.json is a file with contents similar to:

[
  {
    "piece_cid": "bafk2bzacecg3xxc4f2ql2hreiuy767u6r72ekdz54k7luieknboaakhft5rgk",
    "piece_size": 1337,
    "client": "5FLSigC9HGRKVhB9FiEo4Y3koPsNmBmLJbpXg2mp1hXcS59Y",
    "provider": "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY",
    "label": "Super Cool (but secret) Plans for a new Polkadot Storage Solution",
    "start_block": 69,
    "end_block": 420,
    "storage_price_per_block": 15,
    "provider_collateral": 2000,
    "state": "Published"
  },
  {
    "piece_cid": "bafybeih5zgcgqor3dv6kfdtv3lshv3yfkfewtx73lhedgihlmvpcmywmua",
    "piece_size": 1143,
    "client": "5FLSigC9HGRKVhB9FiEo4Y3koPsNmBmLJbpXg2mp1hXcS59Y",
    "provider": "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY",
    "label": "List of problematic (but flying) Boeing planes",
    "start_block": 1010,
    "end_block": 1997,
    "storage_price_per_block": 1,
    "provider_collateral": 3900,
    "state": "Published"
  }
]
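As a sanity check on the figures above, and assuming the total storage fee is simply the per-block price multiplied by the deal duration in blocks, the first deal would cost the client \((420 - 69) \times 15 = 351 \times 15 = 5265\) Plancks, while the provider puts up 2000 Plancks of collateral that can be slashed if the deal fails.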

More information about the publish_storage_deals extrinsic is available in Pallets/Market Pallet/Publish Storage Deals.

settle-deal-payments

The settle-deal-payments command makes the storage provider receive the funds owed for storing data for their clients. Non-existent deal IDs are ignored.

Anyone can settle anyone's deals, though there's little incentive to do so as it costs gas, so the Storage Provider will end up being the caller most of the time.

Parameters

| Name     | Description                         |
| -------- | ----------------------------------- |
| DEAL_IDS | The IDs for the deals to be settled |

Example

Settling deals with the IDs 97, 1010, 1337, 42069:

storagext-cli --sr25519-key "//Alice" market settle-deal-payments 97 1010 1337 42069

More information about the settle_deal_payments extrinsic is available in Pallets/Market Pallet/Settle Deal Payments.

retrieve-balance

The retrieve-balance command checks the balance of a given market account.

Parameters

| Name       | Description                         |
| ---------- | ----------------------------------- |
| ACCOUNT_ID | The ID of the account being checked |

Example

storagext-cli market retrieve-balance "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY" # Alice's account

This command is not signed, and does not need to be called using any of the --X-key flags.

The storage-provider command

Under the storage-provider subcommand, Storage Provider related extrinsics are available. This chapter covers the provided commands and how to use them.

The storagext-cli getting started page covers the basic flags necessary to operate the CLI and should be read first.

register

The register command registers the signer as a storage provider. Before a user can start providing storage, they need to register in order to deal with clients and perform any storage provider duties.

Parameters

| Name       | Description                                                                 | Type   |
| ---------- | --------------------------------------------------------------------------- | ------ |
| PEER_ID    | The peer ID under which the registered provider will be tracked            | String |
| POST_PROOF | The proof type that the provider will use to prove storage (Default: 2KiB) | String |

Example

Registering the provider with a specific peer_id.

storagext-cli --sr25519-key <key> storage-provider register <peer_id>

More information about the register extrinsic is available in Pallets/Storage Provider/Register.

pre-commit

The pre-commit command pre-commits a sector with deals that have been published by market publish-storage-deals. The pre-committed sector has to be proven or the deals will not activate and will be slashed.

Parameters

| Name               | Description                     | Type                                                                                                                        |
| ------------------ | ------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| PRE_COMMIT_SECTORS | The sector we are committing to | JSON object. Can be passed as a string, or as a file path prefixed with @ pointing to the file containing the JSON object. |

The PRE_COMMIT_SECTORS JSON object has the following structure:

| Name                   | Description                              |
| ---------------------- | ---------------------------------------- |
| sector_number          | Sector number                            |
| sealed_cid             | Byte encoded CID                         |
| deal_ids               | List of deal IDs                         |
| expiration             | Sector expiration                        |
| unsealed_cid           | Byte encoded CID                         |
| seal_proof             | Sector seal proof                        |
| seal_randomness_height | The block number used in the PoRep proof |

Example

Pre-commits a sector with specified deals.

storagext-cli --sr25519-key <key> storage-provider pre-commit \
    "@pre-commit-sector.json"

Where pre-commit-sector.json is a file with contents similar to:

[
  {
    "sector_number": 0,
    "sealed_cid": "bafk2bzaceajreoxfdcpdvitpvxm7vkpvcimlob5ejebqgqidjkz4qoug4q6zu",
    "deal_ids": [0],
    "expiration": 100,
    "unsealed_cid": "bafkreibme22gw2h7y2h7tg2fhqotaqjucnbc24deqo72b6mkl2egezxhvy",
    "seal_proof": "StackedDRG2KiBV1P1",
    "seal_randomness_height": 85
  }
]

More information about the pre_commit extrinsic is available in Pallets/Storage Provider/Pre-commit sector.

prove-commit

The prove-commit command proves a sector commitment. Currently, any proof that is a hex-encoded string of length >= 1 is accepted. After the sector is proven, the deals become Active.

Parameters

| Name                 | Description               | Type                                                                                                                        |
| -------------------- | ------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| PROVE_COMMIT_SECTORS | The sector we are proving | JSON object. Can be passed as a string, or as a file path prefixed with @ pointing to the file containing the JSON object. |

The PROVE_COMMIT_SECTORS JSON object has the following structure:

| Name          | Description       |
| ------------- | ----------------- |
| sector_number | Sector number     |
| proof         | Hex encoded proof |

Example

Proves a sector commitment.

storagext-cli --sr25519-key <key> storage-provider prove-commit \
    "@prove-commit-sector.json"

Where prove-commit-sector.json is a file with contents similar to:

[
  {
    "sector_number": 0,
    "proof": "beef"
  }
]

More information about the prove_commit extrinsic is available in Pallets/Storage Provider/Prove-commit sector.

submit-windowed-post

The submit-windowed-post command submits a windowed PoSt proof. The PoSt proof needs to be submitted periodically to prove that the sectors are still being stored. Sectors are proven in batches called partitions.

Parameters

| Name          | Description                  | Type                                                                                                                        |
| ------------- | ---------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| WINDOWED_POST | The proof for some partition | JSON object. Can be passed as a string, or as a file path prefixed with @ pointing to the file containing the JSON object. |

The WINDOWED_POST JSON object has the following structure:

| Name      | Description                             |
| --------- | --------------------------------------- |
| deadline  | Deadline ID                             |
| partition | Partition ID                            |
| proof     | The PROOF JSON object, described below  |

The PROOF JSON object has the following structure:

| Name        | Description                                       |
| ----------- | ------------------------------------------------- |
| post_proof  | Proof type ("2KiB" or "StackedDRGWindow2KiBV1P1") |
| proof_bytes | Hex encoded proof                                 |

Example

Proves partitions in a specific deadline.

storagext-cli --sr25519-key <key> storage-provider submit-windowed-post \
    "@window-proof.json"

Where window-proof.json is a file with contents similar to:

{
  "deadline": 0,
  "partitions": [0],
  "proof": {
    "post_proof": "2KiB",
    "proof_bytes": "07482439"
  }
}

More information about the submit_windowed_post extrinsic is available in Pallets/Storage Provider/Submit Windowed Post.

declare-faults

The declare-faults command declares faulty sectors. This is required to avoid penalties for not submitting a windowed PoSt at the required time.

Parameters

| Name   | Description             | Type                                                                                                                      |
| ------ | ----------------------- | --------------------------------------------------------------------------------------------------------------------------- |
| FAULTS | List of declared faults | JSON array. Can be passed as a string, or as a file path prefixed with @ pointing to the file containing the JSON array. |

The FAULTS JSON object has the following structure:

| Name      | Description       |
| --------- | ----------------- |
| deadline  | Deadline ID       |
| partition | Partition ID      |
| sectors   | Faulty sector IDs |

Example

Declares a list of faulty sectors in a specific deadline and partition.

storagext-cli --sr25519-key <key> storage-provider declare-faults \
    "@faults.json"

Where faults.json is a file with contents similar to:

[
  {
    "deadline": 0,
    "partition": 0,
    "sectors": [0]
  }
]

More information about the declare_faults extrinsic is available in Pallets/Storage Provider/Declare Faults.

declare-faults-recovered

The declare-faults-recovered command declares recovered faulty sectors.

Parameters

| Name       | Description                 | Type                                                                                                                      |
| ---------- | --------------------------- | --------------------------------------------------------------------------------------------------------------------------- |
| RECOVERIES | List of declared recoveries | JSON array. Can be passed as a string, or as a file path prefixed with @ pointing to the file containing the JSON array. |

The RECOVERIES JSON object has the following structure:

| Name      | Description                                         |
| --------- | --------------------------------------------------- |
| deadline  | Deadline ID                                         |
| partition | Partition ID                                        |
| sectors   | IDs of the faulty sectors being declared recovered  |

Example

Declares a list of sectors as recovered in a specific deadline and partition.

storagext-cli --sr25519-key <key> storage-provider declare-faults-recovered \
    "@recoveries.json"

Where recoveries.json is a file with contents similar to:

[
  {
    "deadline": 0,
    "partition": 0,
    "sectors": [0]
  }
]

More information about the declare_faults_recovered extrinsic is available in Pallets/Storage Provider/Declare Faults Recovered.

terminate-sectors

The terminate-sectors command terminates sectors and fully removes them.

Parameters

| Name         | Description                   | Type                                                                                                                      |
| ------------ | ----------------------------- | --------------------------------------------------------------------------------------------------------------------------- |
| TERMINATIONS | List of declared terminations | JSON array. Can be passed as a string, or as a file path prefixed with @ pointing to the file containing the JSON array. |

The TERMINATIONS JSON object has the following structure:

| Name      | Description                     |
| --------- | ------------------------------- |
| deadline  | Deadline ID                     |
| partition | Partition ID                    |
| sectors   | IDs of sectors to be terminated |

Example

Terminates a list of sectors in a specific deadline and partition.

storagext-cli --sr25519-key <key> storage-provider terminate-sectors \
    "@terminations.json"

Where terminations.json is a file with contents similar to:

[
  {
    "deadline": 0,
    "partition": 0,
    "sectors": [0]
  }
]

More information about the terminate_sectors extrinsic is available in Pallets/Storage Provider/Terminate Sectors.

retrieve-storage-providers

The retrieve-storage-providers command retrieves all registered storage providers.

Example

Retrieving all registered storage providers

storagext-cli storage-provider retrieve-storage-providers

This command is not signed, and does not need to be called using any of the --X-key flags.

The proofs command

This command will be removed in the future. It's currently provided for easier testing.
The storagext-cli getting started page covers the basic flags necessary to operate the CLI and should be read first.

Under the proofs subcommand, Proofs related extrinsics are available. This chapter covers the provided commands and how to use them.

set-porep-verifying-key

The set-porep-verifying-key command adds the PoRep verifying key to the chain.

Parameters

| Name | Description               | Type   |
| ---- | ------------------------- | ------ |
| KEY  | Hex encoded verifying key | String |

Example

Adding a PoRep verifying key to the chain.

storagext-cli --sr25519-key "//Alice" proofs set-porep-verifying-key @2KiB.porep.vk.scale

The randomness command

Currently, the random value returned by the testnet node is the same for all blocks.
The storagext-cli getting started page covers the basic flags necessary to operate the CLI and should be read first.

Under the randomness subcommand, Randomness related extrinsics are available. This chapter covers the provided commands and how to use them.

get

The get command fetches the random value for a specific block height. The returned value is hex encoded.

Parameters

| Name  | Description  | Type             |
| ----- | ------------ | ---------------- |
| BLOCK | Block height | Positive integer |

Example

Fetching the random value for block 100.

storagext-cli randomness get 100

The system command

The system command provides various utilities for interacting with the blockchain, retrieving information about the current state of the chain.

The storagext-cli getting started page covers the basic flags necessary to operate the CLI and should be read first.

get-height

The get-height command gets the current block height of the chain.

Example

Getting the current block height of the chain.

storagext-cli system get-height

wait-for-height

The wait-for-height command waits for the chain to reach a specific block height, exiting once that height has been reached.

Parameters

| Name   | Description                  | Type             |
| ------ | ---------------------------- | ---------------- |
| HEIGHT | The block height to wait for | Positive integer |

Example

Waiting for the chain to reach block height 100.

storagext-cli system wait-for-height 100

The faucet command

Under the faucet subcommand, faucet related extrinsics are available. This chapter covers the provided commands and how to use them.

drip

The drip command tops up the provided account.

Parameters

| Name    | Description           | Type    |
| ------- | --------------------- | ------- |
| ACCOUNT | Account ID to drip to | Account |

Example

Topping up 5GpRRVXgPSoKVmUzyinpJPiCjfn98DsuuHgMV2f9s5NCzG19

storagext-cli faucet drip 5GpRRVXgPSoKVmUzyinpJPiCjfn98DsuuHgMV2f9s5NCzG19

Mater CLI

The Mater CLI is used by storage clients to convert files to the CARv2 format and extract CARv2 content.

Currently, the mater-cli only supports the CARv2 format. However, the mater library has full support for CARv1.

To learn more about the CAR format, please refer to the official specifications:

convert

The convert command converts a file to CARv2 format.

mater-cli convert <INPUT_PATH> [OUTPUT_PATH]

| Argument      | Description                                                                                                                         |
| ------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| <INPUT_PATH>  | Path to input file                                                                                                                  |
| [OUTPUT_PATH] | Optional path to output CARv2 file. If no output path is given it will store the .car file in the same location as the input file. |
| -q/--quiet    | If enabled, only the resulting CID will be printed.                                                                                 |
| --overwrite   | If enabled, the output will overwrite any existing files.                                                                           |

Example

$ mater-cli convert random1024.piece
Converted examples/random1024.piece and saved the CARv2 file at examples/random1024.car with a CID of bafkreidvyofebclo4kny43vpoe5kejg3mqtpq2eemaojzyvlwikwdvusxy
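The flags listed above can be combined with an explicit output path. For example, the following invocation (a sketch, assuming the flags are accepted after the subcommand as is typical for clap-based CLIs) overwrites any existing output file and prints only the resulting CID:

$ mater-cli convert --overwrite -q random1024.piece random1024.car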

You can verify the output file using go-car:

$ car inspect examples/random1024.car
Version: 2
Characteristics: 00000000000000000000000000000000
Data offset: 51
Data (payload) length: 1121
Index offset: 1172
Index type: car-multihash-index-sorted
Roots: bafkreidvyofebclo4kny43vpoe5kejg3mqtpq2eemaojzyvlwikwdvusxy
Root blocks present in data: Yes
Block count: 1
Min / average / max block length (bytes): 1024 / 1024 / 1024
Min / average / max CID length (bytes): 36 / 36 / 36
Block count per codec:
        raw: 1
CID count per multihash:
        sha2-256: 1

extract

Convert a CARv2 file to its original format.

mater-cli extract <INPUT_PATH> [OUTPUT_PATH]

| Argument      | Description                                                                                                                     |
| ------------- | --------------------------------------------------------------------------------------------------------------------------------- |
| <INPUT_PATH>  | Path to CARv2 file                                                                                                              |
| [OUTPUT_PATH] | Optional path to output file. If no output path is given it will remove the extension and store the file in the same location. |

Example

$ mater-cli extract examples/random1024.car
Successfully converted CARv2 file examples/random1024.car and saved it to examples/random1024

Conversely, you can also extract files generated using car:

# --no-wrap is necessary since mater does not perform wrapping
$ car create --no-wrap -f examples/random1024.go.car examples/random1024.piece
$ cargo run -r --bin mater-cli extract examples/random1024.go.car
Successfully converted CARv2 file examples/random1024.go.car and saved it to examples/random1024.go

Zombienet Configuration Breakdown

Running Zombienet requires a configuration file. This configuration file is downloaded during the third step of the Linux/MacOS setup, or can be copied from the first step of Running the parachain.

Similarities

The two files share most of the contents, so we'll start by covering their similarities. For more details refer to the zombienet documentation:

relaychain

| Name            | Description                                  |
| --------------- | -------------------------------------------- |
| chain           | The relaychain name                          |
| default_args    | The default arguments passed to the command  |
| default_command | The default command to run the relaychain    |
| nodes           | List of tables defining the nodes to run     |

nodes

| Name      | Description                            |
| --------- | -------------------------------------- |
| name      | The node name                          |
| validator | Whether the node is a validator or not |

parachains

A list of tables defining multiple parachains; in our case, we only care about our own parachain.

| Name          | Description                                                  |
| ------------- | ------------------------------------------------------------ |
| cumulus_based | Whether to use cumulus based generation                      |
| id            | The parachain ID, we're using 1000 as a placeholder for now  |
| collators     | List of tables defining the collators                        |

collators

| Name      | Description                              |
| --------- | ---------------------------------------- |
| args      | The arguments passed to the command      |
| command   | The command to run the collator          |
| name      | The collator name                        |
| validator | Whether the collator is also a validator |

Differences

The difference between them lies in the usage of container configurations:

| Name              | Description                                                                                                              |
| ----------------- | -------------------------------------------------------------------------------------------------------------------------- |
| image_pull_policy | Defines when zombienet should pull an image; read more about it in the Kubernetes documentation                         |
| image             | Defines which image to pull                                                                                              |
| ws_port/rpc_port  | Depending on the type of configuration (Native or Kubernetes), this variable sets the port for the collator RPC service |
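Putting these keys together, a minimal native configuration might look roughly like the sketch below; the chain name, commands, port and node names are illustrative placeholders rather than the exact values shipped with the project.

# Sketch of a native Zombienet configuration (placeholder values)
[relaychain]
chain = "rococo-local"                 # the relaychain name
default_command = "polkadot"           # binary used to run the relaychain nodes
default_args = ["-lparachain=debug"]   # arguments passed to every node

[[relaychain.nodes]]
name = "alice"
validator = true

[[relaychain.nodes]]
name = "bob"
validator = true

[[parachains]]
id = 1000                              # parachain ID placeholder
cumulus_based = true

[[parachains.collators]]
name = "collator-01"
command = "polka-storage-node"         # binary used to run the collator
args = ["--detailed-log-output"]
validator = true
rpc_port = 9944                        # Native: port for the collator RPC service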

Glossary and Anti-Glossary

This document provides definitions and explanations for terms used throughout the project and a list of terms that should not be used.

Table of Contents

Glossary

This section lists terms used throughout the project.

Actor

In Filecoin, an actor is an on-chain object with its state and set of methods. Actors define how the Filecoin network manages and updates its global state.

Bond

This term is used in:

  • Parachain Slot Auction. To bid in an auction, parachain teams agree to lock up (or bond) a portion of DOT tokens for the duration of the lease. While bonded for a lease, the DOT cannot be used for other activities like staking or transfers.

  • Collator slot auction (selection mechanism). It is used as a deposit to become a collator. Candidates can register by placing the minimum bond. Then, if an account wants to participate in the collator slot auction, they have to replace an existing candidate by placing a more significant deposit (bond).

Collateral

Collaterals are assets locked up or deposited as a form of security to mitigate risks and ensure the performance of specific actions. Collateral acts as a guarantee that an individual will fulfil their obligations. Failing to meet obligations or behaving maliciously can result in the loss of staked assets or collateral as a penalty for non-compliance or misconduct by slashing.

Collator

Collators maintain parachains by collecting parachain transactions from users and producing state transition proofs for Relay Chain validators. In other words, collators maintain parachains by aggregating parachain transactions into parachain block candidates and producing state transition proofs (Proof-of-Validity, PoV) for validators. They must provide a financial commitment (collateral) to ensure they are incentivized to perform their duties correctly and to dissuade malicious behaviour.

Committed Capacity

The Committed Capacity (CC) is one of three types of deals in which there is effectively no deal, and the Storage Provider stores random data inside the sector instead of customer data.

If a storage provider doesn't find any available deal proposals appealing, they can alternatively make a capacity commitment, filling a sector with arbitrary data, rather than with client data. Maintaining this sector allows the storage provider to provably demonstrate that they are reserving space on behalf of the network.

Commitment of Data

This value is also known as commD or unsealed_cid. As the storage miner receives each piece of client data, they place it into a sector. Sectors are the fundamental units of storage in Filecoin, and can contain pieces from multiple deals and clients.

Once a sector is full, a CommD (Commitment of Data, aka UnsealedSectorCID) is produced, representing the root node of all the piece CIDs contained in the sector.

Commitment of Replication

The terms commR, sealed_cid, and commitment of replication are interchangeable. During sealing, the sector data (identified by the CommD) is encoded through a sequence of graph and hashing processes to create a unique replica. The root hash of the merkle tree of the resulting replica is the CommRLast.

The CommRLast is then hashed together with the CommC (another merkle root output from Proof of Replication). This generates the CommR (Commitment of Replication, aka SealedSectorCID), which is recorded to the public blockchain. The CommRLast is saved privately by the miner for future use in Proof of Spacetime, but is not saved to the chain.

Crowdloan

Projects can raise DOT tokens from the community through crowdloans. Participants pledge their DOT tokens to help the project win a parachain slot auction. If successful, the tokens are locked up for the duration of the parachain lease, and participants might receive rewards or tokens from the project in return.

Deadline

A deadline is one of the multiple points during a proving period when proofs for some partitions are due.

For more information on deadlines, read the original Filecoin specification: https://spec.filecoin.io/#section-algorithms.pos.post.design

Extrinsics

From the Polkadot Wiki:

Within each functional pallet on the blockchain, one can call its functions and execute them successfully, provided they have the permission to do so. Because these calls originate outside of the blockchain runtime, such transactions are referred to as extrinsics.

Fault

A fault happens when a proof is not submitted within the proving period. For a sector to stop being considered in proving periods, it needs to be declared as faulty — indicating the storage provider is aware of the faulty sector and will be working to restore it. If a sector is faulty for too long, it will be terminated and the deal will be slashed.

For more information on faults, read the original Filecoin specification: https://spec.filecoin.io/#section-glossary.fault

Full Node

A device (computer) that fully downloads and stores the entire blockchain of the parachain, validating and relaying transactions and blocks within the network. It is one of the node types.

Invulnerable

A status assigned to certain collators that makes them exempt from being removed from the active set of collators.

Node

A device (computer) that participates in running the protocol software of a decentralized network; in other words, a participant of the blockchain network who runs it locally.

Parachain

A parachain is a specialized blockchain that runs in parallel to other parachains within a larger network, benefiting from shared security and interoperability, and can be validated by the validators of the Relay Chain.

Partition

Partitions are logical groups¹ of sectors to be proven together.

The number of sectors to be proven at once is 2349², as defined by Filecoin.

For more information on partitions, read the original Filecoin specification: https://spec.filecoin.io/#section-algorithms.pos.post.constants--terminology

¹ They do not reflect the physical storage state, only existing in the context of deadlines and proofs.

² Filecoin defined the limit at 2349 to cope with computational limits, as described in the specification.

Planck

From the Polkadot Wiki:

The smallest unit for the account balance on Substrate based blockchains (Polkadot, Kusama, etc.) is Planck (a reference to Planck Length, the smallest possible distance in the physical Universe). DOT's Planck is like BTC's Satoshi or ETH's Wei. Polkadot's native token DOT equals to \(10^{10}\) Planck and Kusama's native token KSM equals to \(10^{12}\) Planck.
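As a small worked example (assuming the testnet token follows DOT's denomination of \(10^{10}\) Planck per token), the 1000000000 Plancks used as the minimum deposit in the market add-balance example correspond to \(10^{9} / 10^{10} = 0.1\) of the native token.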

Polkadot

“Layer-0” blockchain platform designed to facilitate interoperability, scalability and security among different “Layer-1” blockchains, called parachains.

Proofs

Cryptographic evidence used to verify that storage providers have received, are storing, and are continuously maintaining data as promised.

There are two main types of proofs:

  • Proof-of-Replication (PoRep): In order to register a sector with the network, the sector has to be sealed. Sealing is a computation-heavy process that produces a unique representation of the data in the form of a proof, called Proof-of-Replication or PoRep.

  • Proof-of-Spacetime (PoSt): Used to verify that the storage provider continues to store the data over time. Storage providers must periodically generate and submit proofs to show that they are still maintaining the stored data as promised.

Proving Period

A proving period is when storage providers' commitments are audited and they must prove they are still storing the data from the deals they signed. It is the average period for proving all sectors maintained by a provider (default set to 24 hours).

For more information on proving periods, read the original Filecoin specification:

Relay Chain

The Relay Chain in Polkadot is the central chain (blockchain) responsible for the network's shared security, consensus, and cross-chain interoperability.

Sector

The sector is the default unit of storage that providers put into the network. A sector is a contiguous array of bytes that a storage provider puts together, seals, and performs Proofs of Spacetime on. Storage providers store data on the network in fixed-size sectors.

For more information on sectors, read the original Filecoin specification: https://spec.filecoin.io/#section-glossary.sector

Session

A predefined period during which a set of collators remains constant.

Slashing

The process of penalizing network participants, including validators, nominators, and collators, for various protocol violations. These violations could include producing invalid blocks, equivocation (double signing), inability of the Storage Provider to prove that the data is stored and maintained as promised, or other malicious activities. As a result of slashing, participants may face a reduction in their staked funds or other penalties depending on the severity of the violation.

Slot Auction

To secure a parachain slot, a project must win an auction by pledging (locking up) a significant amount of DOT tokens. These tokens are used as collateral to secure the slot for a specified period. Once the slot is secured, the project can launch and operate its parachain.

Staking

Staking is when DOT holders lock up their tokens to support the network's security and operations. In return, they can earn rewards. There are two main roles involved in staking:

  • Validators: Validators produce new blocks, validate transactions, and secure the network. They are selected based on their stake and performance. Validators must run a node and have the technical capability to maintain it.

  • Nominators: Nominators support the network by backing (nominating) validators they trust with their DOT tokens. Nominators share in the rewards earned by the validators they support. This allows DOT holders who don't want to run a validator node to still participate in the network's security and earn rewards.

Our parachain will use staking to back the collators in a similar fashion. In this mapping, the role of "Nominators" falls to Storage Providers, while the role of "Validators" is assigned to Collators.

Storage Provider

The user who offers storage space on their devices to store data for others.

Storage User

Aka Client: The user who initiates storage deals by providing data to be stored on the network by the Storage Provider.

System Parachain

System-level chains move functionality from the Relay Chain into parachains, minimizing the administrative use of the Relay Chain. For example, a governance parachain could move all the Polkadot governance processes from the Relay Chain into a parachain.

Anti-Glossary

This section lists terms that should not be used within the project, along with preferred alternatives.

Term to Avoid: Miner

In Filecoin, a "Lotus Miner" is responsible for storage-related operations, such as sealing sectors (PoRep (Proof-of-Replication)), proving storage (PoSt (Proof-of-Spacetime)), and participating in the Filecoin network as a storage miner.

Reason: In the Filecoin network, the miner simultaneously plays the roles of storage provider and block producer. However, this term cannot be used in the Polkadot ecosystem because there are no block producers in parachains; the Relay Chain is responsible for block production. Parachains can only prepare block candidates via the Collator node and pass them to the Relay Chain.

Term to Avoid: Pledge

It's better to apply this term within its proper context rather than avoiding it altogether. It's easy to confuse it with staking, but they have distinct meanings.

Reason: Pledging generally refers to locking up tokens as collateral to participate in certain network activities or services, such as Parachain Slot Auctions and Crowdloans.