Requesting Data

1. Configure Request Batch (OnDemandMain)

function register_new_request_batch(
    RegistrationStructs.BatchConfiguration calldata bc) external payable
returns (address);

The initial call that starts every request batch. The creator must send payment for using the ODO system, which covers the cost of deploying the contracts needed to handle the user's data requests in preparation for requesting the data.

struct RequestConfiguration
{
    // the name to identify the request batch
    string request_tag;

    // the user who is paying for the data request
    address creator;
    
    // the base amount in USD that users need to stake in order to provide
    // data for any of the datasets
    uint256 collateral;

    // the initial endorsements by which a user's data submission has to
    // lead in order to reach consensus
    uint256 required_lead;

    // a Unix timestamp of when the data should be ready off-chain for the
    // entire batch, unless overridden for individual datasets
    uint256 time_for_batch;

    // the length of time in seconds after consensus has been reached that
    // users with a monetary stake in the contract can dispute the
    // correctness of the consensus value
    uint256 dispute_period_length;
}
  • The request_tag is not stored on-chain, but it is emitted in an event to help the front end know what to display (particularly useful when other projects build on top of the ODO and request data that doesn't go through the Modefi front end).

  • The creator field must be set to the transaction signer (there is no way to deploy on someone's behalf, but there are ways to give up some control; see Setting Times).

  • The time_for_batch is an optional parameter that can be left as 0 to either set the time later or only use custom dataset times (see Setting Times section).

  • The collateral, required_lead, and dispute_period_length are used to determine the actual stake and payout values (see Algorithms section).
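
Putting the pieces together, a batch registration might look like the following sketch. Only the register_new_request_batch signature and the configuration fields above come from this page; the RegistrationStructs definitions, the example contract, the field values, and the assumption that the returned address is the deployed delegator are illustrative placeholders (the page names the struct RequestConfiguration while the signature takes RegistrationStructs.BatchConfiguration; the sketch assumes they describe the same layout).

pragma solidity ^0.8.0;

// Sketch only: reconstructed from the signature and struct shown above.
library RegistrationStructs {
    struct BatchConfiguration {
        string request_tag;
        address creator;
        uint256 collateral;
        uint256 required_lead;
        uint256 time_for_batch;
        uint256 dispute_period_length;
    }
}

interface IOnDemandMain {
    function register_new_request_batch(
        RegistrationStructs.BatchConfiguration calldata bc) external payable
    returns (address);
}

contract BatchRegistrationExample {
    // registers a batch; the returned address is presumably the contract
    // deployed for this batch (used as the delegator in step 2)
    function register_example_batch(IOnDemandMain main)
        external payable returns (address delegator)
    {
        RegistrationStructs.BatchConfiguration memory bc =
            RegistrationStructs.BatchConfiguration({
                request_tag: "world-chess-championship", // emitted in an event, not stored on-chain
                creator: address(this),                  // must match whoever calls OnDemandMain (this contract here)
                collateral: 100,                         // base USD stake (placeholder value)
                required_lead: 3,                        // endorsement lead needed for consensus (placeholder)
                time_for_batch: 0,                       // 0 = set the time later or use per-dataset times
                dispute_period_length: 1 days            // dispute window after consensus (placeholder)
            });

        // msg.value pays the ODO fee and the deployment cost of the batch contracts
        delegator = main.register_new_request_batch{value: msg.value}(bc);
    }
}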

2. Configure Data (OnDemandMain)

function add_to_request_batch(
    RegistrationStructs.DatasetConfigurations calldata dc) external payable;

struct DatasetConfigurations
{
    // the address of a ODDelegator contract that has been deployed already
    address delegator;

    // the list of dataset configurations to be requested from the On-Demand
    // Oracle system
    DataRequest[] data_requests;
}

struct DataRequest
{
    // the name of the dataset (unique within a request batch)
    string tag;

    // the amount of data in the requested dataset; should be 1 for a single
    // datum and the length of the list when requesting an ordered list of
    // values
    uint256 data_required;

    // the type of the data in this dataset; used to determine where the
    // intermediate data must be stored
    Types.Type data_type;

    // a value that might be required to interpret the value, such as the
    // number of decimal places in a decimal number or a protocol to
    // interpret bits in a raw bytes data type.
    uint256 interpretation_key;
}

Depending on how many datasets are requested in a batch, the dataset configuration might be split up across more than one transaction. These transactions can be sent once the initial batch registration transaction has been mined.

Because adding data affects the pay rate, and it would be unfair for the payout to change after someone has staked, datasets can only be added until staking starts for the first dataset in the request batch.

  • The tag is not stored on-chain, but it is emitted in an event to help the front end know what to display, as in the case of the request_tag.

  • The data_required and data_type fields are the main configuration parameters: data_required sets the length of list data (or 1 for a single datum), and data_type sets the type of the data, such as a string or positive integer.

  • The interpretation_key is usually not used and can remain 0, but, as mentioned in the comment, it can be used to configure the number of decimal places for any decimal-based datasets.
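
A possible way to build and submit the dataset configurations is sketched below. The add_to_request_batch signature and the struct fields are taken from this page; the Types.Type members, the placement of DataRequest inside RegistrationStructs, and all example values are assumptions made for illustration.

pragma solidity ^0.8.0;

// Sketch only: the enum members below are placeholders; the real Types.Type
// values live in the ODO contracts and are not listed on this page.
library Types {
    enum Type { PositiveInteger, Decimal, String, Bytes }
}

library RegistrationStructs {
    struct DataRequest {
        string tag;
        uint256 data_required;
        Types.Type data_type;
        uint256 interpretation_key;
    }

    struct DatasetConfigurations {
        address delegator;
        DataRequest[] data_requests;
    }
}

interface IOnDemandMain {
    function add_to_request_batch(
        RegistrationStructs.DatasetConfigurations calldata dc) external payable;
}

contract DatasetConfigExample {
    // adds two datasets to an already-registered batch
    function configure_datasets(IOnDemandMain main, address delegator) external payable {
        RegistrationStructs.DataRequest[] memory requests =
            new RegistrationStructs.DataRequest[](2);

        // a single decimal price, interpreted with 8 decimal places
        requests[0] = RegistrationStructs.DataRequest({
            tag: "btc-usd-closing-price",
            data_required: 1,
            data_type: Types.Type.Decimal,   // placeholder enum member
            interpretation_key: 8
        });

        // an ordered list of three strings (e.g. a podium); no interpretation key
        requests[1] = RegistrationStructs.DataRequest({
            tag: "tournament-top-three",
            data_required: 3,
            data_type: Types.Type.String,    // placeholder enum member
            interpretation_key: 0
        });

        // msg.value carries whatever payment the batch requires for the added data
        main.add_to_request_batch{value: msg.value}(
            RegistrationStructs.DatasetConfigurations({
                delegator: delegator,
                data_requests: requests
            })
        );
    }
}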

3. [Optional] Configure Constraints ([DataType]Constraints)

function edit_restriction_types(
    address delegator_address,
    RestrictionEditData[] calldata restriction_data) external;
function add_expected_outcomes(
    address delegator_address,
    uint256 dataset_id,
    uint256[] calldata new_expected_outcomes) external;
function remove_expected_outcomes(
    address delegator_address,
    uint256 dataset_id,
    uint256[] calldata outcome_indices,
    uint256[] calldata outcomes_to_remove) external;

To better communicate the expected data and ensure the data is valid in both value and formatting, one of two types of constraints can optionally be added: specified outcomes or a bounds-type restriction.

Specified outcomes can be used to ensure both the value and the formatting are correct. For example, if the possible winners of a match are Hikaru Nakamura or Magnus Carlsen, then the outcomes can be stored as “Hikaru Nakamura” and “Magnus Carlsen”, which prevents incorrectly formatted results such as “magnus”, “carlsen”, or “magnus carlsen” instead of “Magnus Carlsen”.
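
The signatures above take uint256 outcomes (they belong to whichever [DataType]Constraints contract handles a numeric type), so the sketch below restricts a hypothetical numeric “total goals” dataset; string outcomes such as the chess example would go through the corresponding string constraints contract, whose interface is not shown here. The IConstraints name and the example contract are made up for illustration.

pragma solidity ^0.8.0;

// Sketch only: the interface name is a stand-in; the two signatures come from this page.
interface IConstraints {
    function add_expected_outcomes(
        address delegator_address,
        uint256 dataset_id,
        uint256[] calldata new_expected_outcomes) external;
    function remove_expected_outcomes(
        address delegator_address,
        uint256 dataset_id,
        uint256[] calldata outcome_indices,
        uint256[] calldata outcomes_to_remove) external;
}

contract OutcomeExample {
    // restrict a "total goals" dataset to the values 0, 1, 2 and 3
    function set_goal_outcomes(IConstraints constraints, address delegator, uint256 dataset_id) external {
        uint256[] memory outcomes = new uint256[](4);
        outcomes[0] = 0;
        outcomes[1] = 1;
        outcomes[2] = 2;
        outcomes[3] = 3;
        constraints.add_expected_outcomes(delegator, dataset_id, outcomes);
    }

    // later, drop the value 3 again; both the stored index and the value are
    // passed, presumably so the contract can cross-check them
    function drop_goal_outcome(IConstraints constraints, address delegator, uint256 dataset_id) external {
        uint256[] memory indices = new uint256[](1);
        uint256[] memory values = new uint256[](1);
        indices[0] = 3;   // assumes outcomes keep their insertion order
        values[0] = 3;
        constraints.remove_expected_outcomes(delegator, dataset_id, indices, values);
    }
}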

Bounds-type restrictions come in eight types, depending on the number of endpoints and whether or not each endpoint is included.

Type                | Lower Bound (L) | Upper Bound (U) | Accepted Range { x }
--------------------|-----------------|-----------------|---------------------
LowerBoundExclusive | excluded        | N/A             | L < x
LowerBoundInclusive | included        | N/A             | L ≤ x
UpperBoundInclusive | N/A             | included        | x ≤ U
UpperBoundExclusive | N/A             | excluded        | x < U
ExclusiveRange      | excluded        | excluded        | L < x < U
UpperExclusiveRange | included        | excluded        | L ≤ x < U
LowerExclusiveRange | excluded        | included        | L < x ≤ U
InclusiveRange      | included        | included        | L ≤ x ≤ U
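
To make the mapping from bound type to comparison explicit, the sketch below evaluates each of the eight types; the enum, function, and contract are illustrative stand-ins, not the actual constraints implementation.

pragma solidity ^0.8.0;

// Illustration only: shows the acceptance test implied by the table above.
contract BoundsIllustration {
    enum BoundType {
        LowerBoundExclusive,
        LowerBoundInclusive,
        UpperBoundInclusive,
        UpperBoundExclusive,
        ExclusiveRange,
        UpperExclusiveRange,
        LowerExclusiveRange,
        InclusiveRange
    }

    function in_bounds(BoundType t, uint256 x, uint256 lower, uint256 upper)
        public pure returns (bool)
    {
        if (t == BoundType.LowerBoundExclusive)  return lower < x;
        if (t == BoundType.LowerBoundInclusive)  return lower <= x;
        if (t == BoundType.UpperBoundInclusive)  return x <= upper;
        if (t == BoundType.UpperBoundExclusive)  return x < upper;
        if (t == BoundType.ExclusiveRange)       return lower < x && x < upper;
        if (t == BoundType.UpperExclusiveRange)  return lower <= x && x < upper;
        if (t == BoundType.LowerExclusiveRange)  return lower < x && x <= upper;
        return lower <= x && x <= upper;         // InclusiveRange
    }
}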

Restrictions can be altered or removed at any time, even after data has already been provided. The reasoning is that an endorser should, in principle, be able to send in the correct data without the suggested outcomes. If an outcome is added, then an endorser couldn't have been endorsing that data, since it wasn't an option before. If an outcome is removed, then the only way an endorser can be affected is if they were about to provide the wrong data, so the change is actually helpful in this edge case.

4. [Optional] Increasing The Payout (OnDemandMain)

function add_tip_to_datasets(address delegator, Tipping.Tip[] calldata tips)
external payable;

This feature exists to encourage endorsers to provide data for a dataset when the reason they are skipping it is that the pay is too low. In the volatile world of crypto, an asset losing value can make getting the data not worth the effort.

For example, say the minimum payout is 1 FTM per datum and a creator chooses that minimum. Furthermore, say the asset loses 20% of its USD value after the request has been deployed. In that case, every new request will have a minimum of 1.25 FTM per datum. If there are enough new data requests, then no one will want to take the old request at 1 FTM per datum. Increasing the pay allows an older data request to be brought in line with newer requests, increasing the likelihood of it being picked up.
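
A sketch of a tip call is shown below. Only the add_tip_to_datasets signature comes from this page; the fields of Tipping.Tip are not documented here, so the (dataset_id, amount) layout is a guess used purely for illustration.

pragma solidity ^0.8.0;

// Sketch only: the Tip fields below are hypothetical.
library Tipping {
    struct Tip {
        uint256 dataset_id;   // hypothetical field
        uint256 amount;       // hypothetical field, in the native asset's smallest unit
    }
}

interface IOnDemandMain {
    function add_tip_to_datasets(address delegator, Tipping.Tip[] calldata tips)
        external payable;
}

contract TipExample {
    // tops up the payout of dataset 0 with everything sent along with this call
    function top_up(IOnDemandMain main, address delegator) external payable {
        Tipping.Tip[] memory tips = new Tipping.Tip[](1);
        tips[0] = Tipping.Tip({dataset_id: 0, amount: msg.value});
        main.add_tip_to_datasets{value: msg.value}(delegator, tips);
    }
}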

5. [Optional] Change Decimal Places Of Decimal-Based Datasets (<DecimalDataType>)

function change_decimals(
    address delegator_address,
    uint256 dataset_id,
    uint256 decimals) external;

The final way the creator can alter their dataset is to change the number of decimal places to use when interpreting the data. This affects how the data providers format their data, so the formatting must be finalized before staking begins.
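
A minimal sketch of setting eight decimal places is shown below; only the change_decimals signature comes from this page, while the IDecimalData interface name and the example contract are made up for illustration.

pragma solidity ^0.8.0;

// Sketch only: IDecimalData stands in for whichever <DecimalDataType> contract
// holds the dataset.
interface IDecimalData {
    function change_decimals(
        address delegator_address,
        uint256 dataset_id,
        uint256 decimals) external;
}

contract DecimalsExample {
    // interpret the dataset's values with 8 decimal places; must be called
    // before staking begins
    function use_eight_decimals(IDecimalData decimal_data, address delegator, uint256 dataset_id) external {
        decimal_data.change_decimals(delegator, dataset_id, 8);
    }
}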
