Benchmark individual proofs
In this section we cover how to run the benchmarks for the individual proofs. The benchmarks are located in the light-client crate folder. They are associated with programs meant to reproduce production environment settings, and they measure performance for a complete end-to-end flow. The numbers we have measured with our production configuration are detailed in the following sections.
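As a quick orientation, and assuming the conventional Cargo layout for benchmark targets (an assumption on our part, not something stated in this section), the benchmark sources can be listed with:
ls light-client/benches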
Longest chain change
This benchmark runs proof generation for the Longest chain program. The program is described in its design document.
On our production configuration, we currently get the following results for SNARK generation for this benchmark:
For SNARKs (times in milliseconds):
{
"proving_time": 407394,
"verification_time": 4
}
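To reproduce this measurement, the benchmark can be run in SNARK mode via the Makefile flow described later in this section (longest_chain is the benchmark name used in the example there, and we assume MODE=SNARK applies the same way here):
$ MODE=SNARK make benchmark
Enter benchmark name: longest_chain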
Storage inclusion
This benchmark runs proof generation for the SPV program. The program is described in its design document.
On our production configuration, we currently get the following results for SNARK generation for this benchmark:
For SNARKs (times in milliseconds):
{
"proving_time": 406711,
"verification_time": 4
}
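This measurement can be reproduced the same way. The exact benchmark name is not listed in this section, so the name below is only a placeholder to be replaced with the actual bench target for the SPV program:
$ MODE=SNARK make benchmark
Enter benchmark name: <spv_benchmark_name>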
Running the benchmarks
Using Makefile
To ease running the benchmarks, we created a Makefile in the light-client crate folder. Just run:
make benchmark
Info
By default, the proof generated will be a STARK proof. To generate a SNARK proof, use the MODE=SNARK environment variable.
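For example, to generate a SNARK proof instead of the default STARK proof:
MODE=SNARK make benchmark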
You will then be asked for the name of the benchmark you want to run. Just fill in the one that is of interest to you:
$ make benchmark
Enter benchmark name: longest_chain
...
Manual
Run the following command:
cargo bench --bench execute -- <benchmark_name>
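As a concrete example, and assuming the MODE variable is also read by the benchmark when invoking cargo directly (it is documented above for the Makefile flow), the longest chain benchmark can be run in SNARK mode with:
MODE=SNARK cargo bench --bench execute -- longest_chain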
Warning
Make sure to set the environment variables as described in the configuration section.
Interpreting the results
Before delving into the details, please take a look at the cycle tracking documentation from SP1 to get a rough sense of what the numbers mean.
The benchmark will output a lot of information. The most important parts are the following:
Total cycles for the program execution
This value can be found on the following line:
INFO summary: cycles=63736, e2e=2506, khz=25.43, proofSize=2.66 MiB
It contains the total number of cycles needed by the program, the end-to-end time in milliseconds, the effective frequency in kHz (total cycles divided by the end-to-end time), and the size of the generated proof.
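As a sanity check on how these fields relate: 63,736 cycles over an end-to-end time of 2,506 ms gives 63736 / 2506 ≈ 25.43 cycles per millisecond, which is the 25.43 kHz reported in the khz field.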
Specific cycle count
In the output, you will find a section that looks like this:
DEBUG ┌╴read_inputs
DEBUG └╴9,553 cycles
DEBUG ┌╴verify_merkle_proof
DEBUG └╴40,398 cycles
These specific cycle counts are generated by us to track the cost of specific operations in the program.
Proving time
At the end of a benchmark, the proving time is output as the following data structure, with each time in milliseconds:
{
proving_time: 100000,
verifying_time: 100000
}
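To relate this to the figures reported earlier in this section: a proving_time of 407394 corresponds to roughly 407 seconds (about 6.8 minutes) of proving, and a verification_time of 4 means verification takes about 4 milliseconds.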
Alternative
Another way to get information about proving time is to run the tests located in the light-client crate. They output the same logs as the benchmarks; only the timing information is reported in a different format:
Starting generation of Merkle inclusion proof with 18 siblings...
Proving locally
Proving took 5.358508094s
Starting verification of Merkle inclusion proof...
Verification took 805.530068ms
To run the tests efficiently, first install nextest following its documentation. Ensure that you also have the previously described environment variables set, then run the following command:
SHARD_BATCH_SIZE=0 cargo nextest run --verbose --release --profile ci --package kadena-lc --no-capture --all-features
Note
The --no-capture flag is necessary to see the logs generated by the tests.
Some tests are ignored by default due to heavier resource requirements. To run them, pass --run-ignored all to nextest.
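For example, to include those heavier tests in the same run as above:
SHARD_BATCH_SIZE=0 cargo nextest run --verbose --release --profile ci --package kadena-lc --no-capture --all-features --run-ignored all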