Interface
Summary: We describe how inputs and outputs to the NEXT system are specified using YAML and JavaScript. This only describes the user-specified arguments, not the complete set of arguments (which is specified in Framework-Apps).
The app interface is defined in `next/apps/<app_id>/myApp.yaml`. This specifies exactly what arguments are sent to the application. For example, if `myApp.yaml` (which must follow the YAML standard) includes the lines
```yaml
R:
  description: sub-Gaussian parameter, e.g. E[exp(t*X)]<=exp(t^2 R^2/2), defaults to R=0.5 (satisfies X \in [0,1])
  type: num
  default: 0.5
  optional: true
```
then the key `R` will be present in the arguments dictionary. It must be a number and is optional. We use the pijemont library for this specification; see that library for more details.
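To make the semantics of `default` and `optional` concrete, here is a minimal sketch of how such a spec entry could be applied to user-supplied arguments. This is illustrative only, not pijemont's actual code; all names here are made up.

```python
# Hypothetical sketch of resolving an optional argument with a default.
# NOT pijemont's implementation -- function and variable names are illustrative.

spec = {"R": {"type": "num", "default": 0.5, "optional": True}}

def resolve_args(spec, args):
    """Fill in defaults for optional keys and check basic types."""
    out = dict(args)
    for key, entry in spec.items():
        if key not in out:
            if entry.get("optional"):
                out[key] = entry.get("default")  # e.g. R defaults to 0.5
            else:
                raise ValueError("missing required argument: " + key)
        if entry["type"] == "num" and not isinstance(out[key], (int, float)):
            raise TypeError(key + " must be a number")
    return out

print(resolve_args(spec, {}))        # default applied: {'R': 0.5}
print(resolve_args(spec, {"R": 2}))  # user value kept: {'R': 2}
```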
This YAML interface is recursive: you can say `type: dict` and then specify the keys and values of that dictionary. Nested types such as `oneof` are also supported:
```yaml
rating_scale:
  description: A set of ratings that are presented to the user on the query page.
  type: oneof
  values:
    scale_parameter:
      description: Sub-Gaussian parameter, e.g. E[exp(t*X)]<=exp(t^2 R^2/2), defaults to R=0.5 (satisfies X \in [0,1])
      type: num
    labels:
      description: List of dictionaries with label and reward keys, (i.e. [{'label':'unfunny','reward':0.},{'label':'somewhat funny','reward':0.5},{'label':'funny','reward':1.}])
      type: list
```
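A `oneof` entry means the user supplies exactly one of the listed alternatives. As a hedged sketch (again, not pijemont's real code; the helper name is my own), checking that might look like:

```python
# Illustrative check for a 'oneof' spec entry: exactly one of the
# alternative keys must be supplied. Hypothetical helper, not pijemont's API.

def check_oneof(entry, supplied):
    """Return the single key from entry['values'] present in supplied."""
    present = [k for k in entry["values"] if k in supplied]
    if len(present) != 1:
        raise ValueError("expected exactly one of %s, got %s"
                         % (sorted(entry["values"]), present))
    return present[0]

rating_scale_spec = {
    "type": "oneof",
    "values": {"scale_parameter": {"type": "num"},
               "labels": {"type": "list"}},
}

print(check_oneof(rating_scale_spec,
                  {"labels": [{"label": "funny", "reward": 1.0}]}))  # labels
```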
We have found that the default applications implement a common interface, so we have written a `base.yaml` that all the default applications inherit from. A complete example can be found in the `examples/` directory.
This section of the tutorial will discuss the interface specification for an application, following the one given in `PoolBasedClassification.yaml`.
Since many of the inputs are common across all applications, a common set of inputs is described in `next/apps/base.yaml` (whose contents are also described in Base-Interface and should not be changed!). All interface files are in standard YAML format, but they must also implement the specific format described by pijemont in "API dictionary format".
The app yaml file, `PoolBasedClassification.yaml`, contains the application-specific inputs that are merged/inherited with the inputs contained in `base.yaml` to define the specification for each function:
```yaml
extends: [base.yaml]
initExp:
  args:
    args:
      values:
        alg_list:
          values:
            values:
              alg_id:
                description: Supported algorithm types for PoolBasedTripletMDS.
                values: [RandomSamplingLinearLeastSquares]
              test_alg_label:
                description: alg_label of the algorithm whose collected labels to use as a test set when validating this algorithm. A resulting plot of test-error on points is available on the dashboard.
                type: str
        targets:
          values:
            targetset:
              values:
                values:
                  meta:
                    type: dict
                    values:
                      features:
                        description: A feature vector for this item.
                        type: list
                        values:
                          type: num
processAnswer:
  args:
    args:
      description: Arguments for initExp
      type: dict
      values:
        target_label:
          description: The label in {-1,1} provided by participant of the target.
          type: num
```
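The `extends: [base.yaml]` line means the app's specification is, in effect, a recursive dictionary update of `base.yaml`. The following is a minimal sketch of that idea, not the code NEXT or pijemont actually uses:

```python
def deep_merge(base, override):
    """Recursively merge override into base, returning a new dict.
    Values in override win; nested dicts are merged key by key."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Toy stand-ins for base.yaml and the app yaml (contents are illustrative):
base = {"initExp": {"args": {"app_id": {"type": "str"}}}}
app = {"initExp": {"args": {"alg_list": {"type": "list"}}}}

print(deep_merge(base, app))
# both app_id (from base) and alg_list (from the app) survive the merge
```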
Let's go through these in more detail.
- `initExp`: Translated to a JSON object, the following is an example of what will be posted to a server running NEXT:
```json
{
  "app_id": "PoolBasedBinaryClassification",
  "args": {
    "instructions": "Example instructions",
    "debrief": "Example debrief",
    "alg_list": [
      {
        "alg_label": "Test",
        "alg_id": "RandomSamplingLinearLeastSquares",
        "test_alg_label": "Test"
      },
      {
        "alg_label": "RandomSamplingLinearLeastSquares",
        "alg_id": "RandomSamplingLinearLeastSquares",
        "test_alg_label": "Test"
      }
    ],
    "algorithm_management_settings": {
      "params": [
        {
          "alg_label": "Test",
          "proportion": 0.2
        },
        {
          "alg_label": "RandomSamplingLinearLeastSquares",
          "proportion": 0.8
        }
      ],
      "mode": "fixed_proportions"
    },
    "participant_to_algorithm_management": "one_to_many",
    "targets": {
      "targetset": [
        {
          "meta": {"features": [-2.539, -0.816]},
          "alt_type": "text",
          "primary_type": "text",
          "primary_description": "Target 0",
          "alt_description": "0"
        },
        {
          "meta": {"features": [0.0817, -0.441]},
          "alt_type": "text",
          "primary_type": "text",
          "primary_description": "Target 1",
          "alt_description": "1"
        },
        {
          "meta": {"features": [1.732, -1.882]},
          "alt_type": "text",
          "primary_type": "text",
          "primary_description": "Target 2",
          "alt_description": "2"
        },
        {
          "meta": {"features": [1.0472, -0.0269]},
          "alt_type": "text",
          "primary_type": "text",
          "primary_description": "Target 3",
          "alt_description": "3"
        },
        {
          "meta": {"features": [-0.396, 1.159]},
          "alt_type": "text",
          "primary_type": "text",
          "primary_description": "Target 4",
          "alt_description": "4"
        },
        ...
      ]
    }
  }
}
```
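Request bodies like this are usually generated rather than written by hand. As a sketch (the helper name is my own, not part of NEXT), the `targetset` above can be built from a list of feature vectors:

```python
def make_targetset(features):
    """Build a targetset list shaped like the initExp example above.
    Each target carries its feature vector under the 'meta' key.
    Hypothetical helper for illustration; not part of the NEXT codebase."""
    return [
        {
            "meta": {"features": list(f)},
            "alt_type": "text",
            "primary_type": "text",
            "primary_description": "Target %d" % i,
            "alt_description": str(i),
        }
        for i, f in enumerate(features)
    ]

targetset = make_targetset([[-2.539, -0.816], [0.0817, -0.441]])
print(targetset[0]["primary_description"])  # Target 0
print(targetset[1]["meta"]["features"])     # [0.0817, -0.441]
```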
Let us now see how we arrived at this as the input that will initialize the experiment by looking at the YAML specification for `initExp`. For example, the need for `app_id` is specified in `base.yaml` lines 4-6. Here, we find the requirement that this value is a string (line 6), and it is mentioned (line 5) that this is the ID (i.e. the name) of the application (so, in our case, "PoolBasedBinaryClassification"). In a similar way, the `meta` key in each target is explicitly specified in `targets`.
TODO: INSERT EXAMPLE RESPONSE FROM NEXT, INSERT INCORRECT RESPONSE
- `getQuery`: `getQuery` is not explicitly specified in the app yaml (there is no application-specific information) and so will be inherited from `base.yaml`. An example post:
And an example response:
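The example post and response bodies appear to be missing from this page. As a purely illustrative sketch, with field names assumed from the common interface rather than verified against `base.yaml`, a `getQuery` exchange might look like:

```python
# Hypothetical getQuery request/response bodies. The field names
# (exp_uid, participant_uid, query_uid, target_indices) are assumptions
# for illustration only -- consult base.yaml for the authoritative fields.

get_query_request = {
    "exp_uid": "c7a4...",            # experiment id returned by initExp
    "args": {"participant_uid": "participant_0"},
}

get_query_response = {
    "query_uid": "f81d...",          # id to echo back in processAnswer
    "target_indices": [{"index": 3, "label": "Target 3"}],
}

print(sorted(get_query_request))     # ['args', 'exp_uid']
```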
This response will be discussed in more detail later; for now it suffices to note what it consists of:
- `processAnswer`: The `processAnswer` interface requires a `query_id` and a label for the item given in this query. Example JSON input:
Example response:
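The JSON examples for `processAnswer` also appear to be missing. A hedged sketch follows; apart from `target_label` (which is specified in the app yaml above), all field names are assumptions rather than values taken from `base.yaml`:

```python
# Hypothetical processAnswer request/response. Field names other than
# target_label (which the app yaml specifies) are assumptions.

process_answer_request = {
    "exp_uid": "c7a4...",               # experiment id (assumed field)
    "args": {
        "query_uid": "f81d...",         # from the getQuery response (assumed)
        "target_label": 1,              # the label in {-1, 1} from the spec
    },
}

process_answer_response = {"acknowledged": True}  # response shape assumed

print(process_answer_request["args"]["target_label"])  # 1
```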
- `getModel`: We will discuss `getModel` when we talk more about dashboards. Note, `getModel` is rarely called explicitly through a POST request and is rather a convenience function for dashboards/app-specific roles.
TODO: GETMODEL IN POOLBASEDBINARYCLASSIFICATION SHOULD TAKE IN AN ALGLABEL