BlockDataStore
Documentation for eth_defi.event_reader.block_data_store.BlockDataStore Python class.
- class BlockDataStore[source]
Persistent storage interface to store any processed data from blockchains.
- Store any block data that is block oriented
- Input is indexed by the block number
- Input and output as pd.DataFrame
- Append writes with chain reorganisation support
- Partial tail reads
Used for
- Caching downloaded block headers and timestamps, so you do not need to fetch them again over JSON-RPC when restarting an application.
- Caching downloaded trades
The input data
- Must be a pd.DataFrame
- Must have a block_number column
- Must have a partition column if the storage implementation does partitioning. This can be the block number rounded down to the nearest partition chunk.
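The expected input shape can be sketched as follows. The partition chunk size and the timestamp column are hypothetical examples, not values mandated by the library:

```python
import pandas as pd

# Hypothetical partition chunk size; pick one that suits your dataset
PARTITION_SIZE = 100_000

df = pd.DataFrame({
    "block_number": [12_000_001, 12_000_002, 12_100_005],
    "timestamp": pd.to_datetime([1_600_000_000, 1_600_000_013, 1_601_000_000], unit="s"),
})

# Partition key: block number rounded down to the nearest chunk
df["partition"] = (df["block_number"] // PARTITION_SIZE) * PARTITION_SIZE

# Input is indexed by the block number
df = df.set_index("block_number", drop=False)
```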
Methods summary
- __init__()
- is_virgin(): Has this store any stored data.
- load([since_block_number]): Load data from the store.
- peak_last_block(): Get the block number of the last data entry stored.
- save(data): Save to the store.
- save_incremental(data): Save the latest data to the store.
- abstract is_virgin()[source]
Has this store any stored data.
- Returns
There is data to load.
- Return type
bool
- abstract load(since_block_number=0)[source]
Load data from the store.
- Parameters
since_block_number (int) –
Return only blocks at or after this block number (inclusive).
The actual read datasets may contain more blocks due to partition boundaries.
- Returns
Data read from the store.
- Return type
pandas.core.frame.DataFrame
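The partition-boundary behaviour mentioned above can be illustrated with a small helper. The chunk size here is a hypothetical example, not the library's default:

```python
PARTITION_SIZE = 100_000  # hypothetical partition chunk size


def partition_start(block_number: int) -> int:
    """Return the first block of the partition containing block_number.

    Because reads happen in whole partitions, load(since_block_number)
    may return a DataFrame starting at this boundary rather than at the
    requested block.
    """
    return (block_number // PARTITION_SIZE) * PARTITION_SIZE
```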
- abstract save(data)[source]
Save to the store.
- Parameters
data (pandas.core.frame.DataFrame) –
- abstract save_incremental(data)[source]
Save the latest data to the store.
Write the minimum amount of data to disk that we think is
- valid
- needed to keep partitions intact
Usually this is two partitions' worth of data.
- Parameters
data (pandas.core.frame.DataFrame) – Must have column ‘block_number’. Must have column partition if partitioning is supported.
- Returns
Block range written (inclusive).
- Return type
Tuple[int, int]