Based on the schema, it looks like the information we need will come from the `click_submit_answer` events:
"click_submit_answer": {
"description": "When the player presses the button to submit their answer to a quiz question",
"event_data": {
"quiz_task" : {
"type": "Dict[str, Any]",
"details": {
"category":"enum(MULTIPLE_CHOICE, MULTIPLE_SELECT, WORD_BANK)",
"lab_name":"str",
"section_number":"str",
"task_number":"int",
"is_active":"bool",
"is_complete":"bool",
"available_tools":"List[enum(INSULATION, LOWER_STOP, UPPER_STOP, INCREASE_WEIGHT, DECREASE_WEIGHT, HEAT, COOLING, CHAMBER_TEMPERATURE, CHAMBER_PRESSURE)]",
"prompts":"List[str]",
"options":"List[str | float]",
"selected_options":"List[str | float]",
"correct_answer":"float | str | List[float] | List[str]"
},
"description": "A list of the tasks for the current lab section shown to the player (not including any tasks that are available but undisplayed)."
},
"is_correct_answer": {
"type": "bool",
"description": "Indicator for whether the submitted selection is correct or not (note, this could be derived from the `quiz_task`)."
},
"hand": {
...
}
}
}
The work on this feature should start with a review of the data, exporting a few days' worth of events.
Check the `click_submit_answer` events in that export; the primary issue will be figuring out how to uniquely identify each quiz question.
I think it will be a combination of `lab_name`, `section_number`, and `task_number`, but I'm not certain whether multiple questions can appear within the same task. If you find different `prompts` under the same task, we'll probably need to include `prompts` in the key to enumerate the individual quizzes a player could encounter.
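For that check, something like the sketch below could work. The file name, separator, and column names (`event_name`, `event_data`) are assumptions about the export format and will need to be adjusted to whatever the exporter actually produces; the idea is just to group submissions by the candidate `(lab_name, section_number, task_number)` key and flag any key that shows up with more than one distinct prompt list.

```python
import json
import pandas as pd

# Assumed format: one row per event, event payload as a JSON string in `event_data`.
events = pd.read_csv("export.tsv", sep="\t")
submits = events[events["event_name"] == "click_submit_answer"].copy()

def quiz_key(raw: str) -> tuple:
    """Pull the candidate identifying fields out of one event's payload."""
    task = json.loads(raw)["quiz_task"]
    return (task["lab_name"], task["section_number"], task["task_number"])

submits["quiz_key"] = submits["event_data"].apply(quiz_key)
submits["prompt_list"] = submits["event_data"].apply(
    lambda raw: tuple(json.loads(raw)["quiz_task"]["prompts"])
)

# If any task-level key maps to more than one distinct prompt list, the
# (lab_name, section_number, task_number) key is not unique on its own and
# prompts would need to be part of the quiz identifier.
prompt_counts = submits.groupby("quiz_key")["prompt_list"].nunique()
print(prompt_counts[prompt_counts > 1])
```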
Once we have this list, we can write the feature as a per-count feature, similar to how we did per-county features in Bloom:
We can hard-code a list of available quizzes (e.g. as "labname.sectionnumber.tasknumber", so "1.3.4" would be lab 1, section 3, task 4), and then in the `_validateEventCountIndex` function, look up the index of the quiz in that list and compare it with `self.CountIndex`.
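A minimal sketch of that approach, under some assumptions: only `_validateEventCountIndex` and `self.CountIndex` come from the discussion above, while the class name, the `QUIZ_INDICES` list, the `extract_from_event` hook, and the shape of `event_data` are placeholders that would need to be adapted to the real per-count feature base class in the extractor code.

```python
# Hard-coded list of known quizzes, keyed as "lab.section.task".
# To be filled in from the data review described above.
QUIZ_INDICES = [
    "1.3.4",
]

class QuizSubmissionCount:
    """Hypothetical per-count feature: counts click_submit_answer events for one quiz."""

    def __init__(self, count_index: int):
        self.CountIndex = count_index
        self._count = 0

    def _validateEventCountIndex(self, event_data: dict) -> bool:
        # Build the "lab.section.task" key from the event and look up its
        # position in the hard-coded quiz list; the event belongs to this
        # feature instance only if that position matches self.CountIndex.
        task = event_data["quiz_task"]
        key = f'{task["lab_name"]}.{task["section_number"]}.{task["task_number"]}'
        try:
            return QUIZ_INDICES.index(key) == self.CountIndex
        except ValueError:
            return False  # unknown quiz; no instance counts it

    def extract_from_event(self, event_data: dict) -> None:
        # Placeholder for whatever event-handling hook the base class provides.
        if self._validateEventCountIndex(event_data):
            self._count += 1
```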