Make the avg_problem_grader just be the weighted grader.
The weighted grader is really just a more advanced version of the
average problem grader that allows weights and "credit". If weights are
not assigned, then a weight of 1 is assumed for each answer, and the
weighted grader returns exactly the same result as the previous average
problem grader.
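As a rough sketch of the scoring (this mirrors the grader code in this
commit, but the answer hash below is a simplified, hypothetical stand-in
for the real evaluated answer hashes), the problem score is the weighted
average of the per-answer scores:

    # Sketch of the weighted scoring. Each answer hash is assumed to have
    # a 'score' between 0 and 1 and an optional 'weight' (1 if not given).
    my %answers = (
        AnSwEr0001 => { score => 1, weight => 40 },
        AnSwEr0002 => { score => 0, weight => 60 },
    );

    my ($score, $total) = (0, 0);
    for my $ans_name (keys %answers) {
        my $weight = $answers{$ans_name}{weight} // 1;    # unweighted answers count as 1
        $total += $weight;
        $score += $weight * $answers{$ans_name}{score};
    }

    # With all weights left at 1 this reduces to the plain average.
    my $problem_score = $total ? $score / $total : 0;     # 0.4 for the data above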

The `weightedGrader.pl` macro should be considered deprecated. The
`WEIGHTED_ANS`, `NAMED_WEIGHTED_ANS`, and `CREDIT_ANS` methods should no
longer be used (although they will still work even with the new average
problem grader). Instead of calling `WEIGHTED_ANS` or
`NAMED_WEIGHTED_ANS`, pass `weight => n` to the `cmp` method to assign a
weight to an answer. Instead of calling `CREDIT_ANS`, pass `credit => $answer1`
or `credit => [ $answer1, $answer2, ... ]` to the `cmp` method. That is
effectively what those methods do anyway. Note that the answers referenced by
the `credit` option need to be assigned names using the `NEW_ANS_NAME` method.
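For example, a problem using the new options might look roughly like the
following. This is only a sketch assembled from the examples in the updated
`PGanswermacros.pl` documentation; the macro list and the MathObject answers
are placeholders.

    DOCUMENT();
    loadMacros('PGstandard.pl', 'PGML.pl', 'MathObjects.pl');

    # Hypothetical answers; the weight and credit options are the point here.
    $optional = Compute('2x');
    $goal     = Compute('x^2');

    BEGIN_PGML
    Optional step: [_]{$optional}{10}{$optionalName = NEW_ANS_NAME()}{{ weight => 20 }}

    Goal: [_]{$goal}{10}{ cmp_options => { credit => $optionalName, weight => 80 } }
    END_PGML

    ENDDOCUMENT();

With this setup, a correct goal answer with the optional step left blank gives
full credit, while a correct optional step with an incorrect goal gives 20%.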

Note that if the `weightedGrader.pl` macro is loaded and
`install_weighted_grader` is not called, then the `WEIGHTED_ANS`,
`NAMED_WEIGHTED_ANS`, and `CREDIT_ANS` methods will still work with this
grader, since the implementation is completely compatible with the
`weighted_grader` defined in the macro. If the macro is loaded and
`install_weighted_grader` is called, then the macro will continue to
work as before. In that case the options can be set either using the
macro methods or as described in the previous paragraph (except that
the `credit` option must be an array reference with the macro's
weighted grader).
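For instance, with the macro grader installed the `cmp` options described
above can still be used, as long as `credit` is an array reference. This is a
hedged sketch: it assumes `install_weighted_grader` is called with no
arguments, as in the macro's usual usage, and the answers and names are
placeholders.

    loadMacros('PGstandard.pl', 'MathObjects.pl', 'weightedGrader.pl');
    install_weighted_grader();

    $optional = Compute('2x');
    $goal     = Compute('x^2');

    # The optional answer needs a name; its rule would be created elsewhere
    # with NAMED_ANS_RULE($optionalName, 10).
    $optionalName = NEW_ANS_NAME();
    NAMED_ANS($optionalName => $optional->cmp(weight => 20));

    # With the macro's weighted grader the credit option must be an array reference.
    ANS($goal->cmp(credit => [$optionalName], weight => 80));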

There is one essential difference between this implementation and the
previous weighted grader when the `credit` option is used. The previous
weighted grader would give credit for the optional answers in the
problem score, but would not change the scores of those answers
themselves. This results in the feedback for those answers showing them
as incorrect and in the message above the problem stating that not all
answers are correct, even though the overall problem score for the
attempt is reported as 100%. The documentation in the
`weightedGrader.pl` macro states that "When credit IS given, the blank
answer is still marked as incorrect in the grey answer report at the top
of the page, but the student gets awarded the points for that answer
anyway (with no other indication). It is possible to cause the blank to
be marked as correct, but this seemed confusing to the students."
However, perhaps due to changes in feedback, it seemed much more
confusing to me for the answers to be marked incorrect, with red
incorrect feedback and a message above stating that not all answers are
correct, while the message far below (which students often ignore)
states that the score for the attempt is 100%. So this implementation
does what is suggested in the documentation and actually changes the
scores for the optional answers. Furthermore, a message is added to
those answers stating, "This answer was marked correct because the
primary answer is correct." I am not sold on that wording, but this
seems much clearer to me. Another option would be to not set the scores
of the optional parts, but to set `$problem_result{msg}` to inform the
user what is going on. That message is displayed immediately below the
problem as "Note: $problem_result{msg}". Since that is closer to the
problem and not far below all of the submit buttons, that might be
enough.
drgrice1 committed Dec 9, 2024
1 parent 0ca9c31 commit c4b2f70
Showing 2 changed files with 168 additions and 63 deletions.
81 changes: 53 additions & 28 deletions lib/WeBWorK/PG/Translator.pm
@@ -1088,46 +1088,71 @@ sub rf_avg_problem_grader {
}

sub avg_problem_grader {
my ($rh_evaluated_answers, $rh_problem_state, %form_options) = @_;
my ($answers, $problem_state, %form_options) = @_;

my %evaluated_answers = %{$rh_evaluated_answers};
my %problem_result = (score => 0, errors => '', type => 'avg_problem_grader', msg => '');

# By default the old problem state is simply passed back out again.
my %problem_state = %$rh_problem_state;
$problem_result{msg} = maketext('You can earn partial credit on this problem.') if keys %$answers > 1;

# Initial setup of the answer
my $total = 0;
my %problem_result = (
score => 0,
errors => '',
type => 'avg_problem_grader',
msg => '',
);
# Return unless answers have been submitted.
return (\%problem_result, $problem_state) unless $form_options{answers_submitted} == 1;

my %credit;

# Get the score for each answer (error if can't recognize the answer format).
for my $ans_name (keys %$answers) {
if (ref($answers->{$ans_name}) =~ m/^(HASH|AnswerHash)$/) {
$credit{$ans_name} = $answers->{$ans_name}{score} // 0;
} else {
$problem_result{error} = "Error: Answer $ans_name is not a hash: $answers->{$ans_name}";
die "Error: Answer |$ans_name| is not a hash reference\n"
. $answers->{$ans_name}
. "\nThis probably means that the answer evaluator for this answer is not working correctly.";
}
}

my $count = keys %evaluated_answers;
$problem_result{msg} = 'You can earn partial credit on this problem.' if $count > 1;
# Mark any optional answers as correct, if the goal answers are right and the optional answers are blank.
for my $ans_name (keys %$answers) {
if ($credit{$ans_name} == 1 && defined $answers->{$ans_name}{credit}) {
for my $credit_name (
ref($answers->{$ans_name}{credit}) eq 'ARRAY'
? @{ $answers->{$ans_name}{credit} }
: $answers->{$ans_name}{credit})
{
if (!defined $answers->{$credit_name}{student_ans}
|| $answers->{$credit_name}{student_ans} =~ m/^\s*$/)
{
$answers->{$credit_name}{score} = 1;
$answers->{$credit_name}{ans_message} =
maketext('This answer was marked correct because the primary answer is correct.');
$credit{$credit_name} = 1;
}
}
}
}

return (\%problem_result, \%problem_state) unless $form_options{answers_submitted} == 1;
my ($score, $total) = (0, 0);

# Answers have been submitted -- process them.
for my $ans_name (keys %evaluated_answers) {
$total += $evaluated_answers{$ans_name}{score};
# Add up the weighted scores
for my $ans_name (keys %$answers) {
my $weight = $answers->{$ans_name}{weight} // 1;
$total += $weight;
$score += $weight * $credit{$ans_name};
}

# Calculate score rounded to three places to avoid roundoff problems
$problem_result{score} = $count ? $total / $count : 0;
$problem_state{recorded_score} //= 0;
$problem_result{score} = $total ? $score / $total : 0;

# Increase recorded score if the current score is greater.
$problem_state{recorded_score} = $problem_result{score}
if $problem_result{score} > $problem_state{recorded_score};
++$problem_state->{num_of_correct_ans} if $score == $total;
++$problem_state->{num_of_incorrect_ans} if $score < $total;
$problem_state->{recorded_score} //= 0;

++$problem_state{num_of_correct_ans} if $total == $count;
++$problem_state{num_of_incorrect_ans} if $total < $count;
# Increase recorded score if the current score is greater.
$problem_state->{recorded_score} = $problem_result{score}
if $problem_result{score} > $problem_state->{recorded_score};

warn "Error in grading this problem the total $total is larger than $count" if $total > $count;
warn "Error in grading this problem: The score $score is larger than the total $total." if $score > $total;

return (\%problem_result, \%problem_state);
return (\%problem_result, $problem_state);
}

=head2 post_process_content
150 changes: 115 additions & 35 deletions macros/core/PGanswermacros.pl
@@ -1587,62 +1587,142 @@ sub std_problem_grader2 {

=head3 C<avg_problem_grader>

This grader gives a grade depending on how many questions from the problem are correct. (The highest
grade is the one that is kept. One can never lower the recorded grade on a problem by repeating it.)
Many professors (and almost all students :-) ) prefer this grader.

This grader gives a "weighted" average score to the problem and is the default
grader.

The grader can be selected by calling

    install_problem_grader(~~&avg_problem_grader);

However, since this is the default grader, that is not necessary in order to
use this grader.

Each answer is assigned a weight (the default is 1). The score is then the sum
of the products of the weights and scores for the answers, divided by the
total of the weights for all answers. (To assign weights as percentages, use
integers that add up to 100. For example, use 40 and 60 for the weights for two
answers.) Assign weights to answers using the C<cmp> option C<< weight => n >>.
For example, in PGML create the answer rule with

    [_]{$answer}{10}{ cmp_options => { weight => 40 } }

With the classic C<ANS> method, call

    ANS($answer->cmp(weight => 40));

This grader also allows for one "goal" answer that is answered correctly to
automatically give credit for one or more other "optional" answers. This way, if
there are several "optional" answers leading up to the "goal" answer, and the
student produces the "goal" answer by some other means and does not answer the
"optional" answers, the student can be given full credit for the problem anyway.

To use this feature use the C<credit> option of the C<cmp> method for the "goal"
answer. For example, C<< credit => $answer1Name >> or C<< credit => [
$answer1Name, $answer2Name, ... ] >>, where C<$answer1Name>, C<$answer2Name>,
etc., are the names of the "optional" answers that will be given credit if the
"goal" answer is correct. Note that the other answers must be assigned names
either by calling C<NAMED_ANS_RULE> and C<NAMED_ANS>, or by creating the answer
rule in PGML with C<[_]{$answer1}{15}{$answer1Name}>, for example. The answer
names should be generated by calling C<NEW_ANS_NAME> (for example,
C<$answer1Name = NEW_ANS_NAME()>) rather than being made up. Otherwise the
problem will fail to work in many situations (for example, in tests). For
example, to set this up in PGML use

    BEGIN_PGML
    Optional Answer 1: [_]{$answer1}{10}{$answer1Name = NEW_ANS_NAME()}
    Optional Answer 2: [_]{$answer2}{10}{$answer2Name = NEW_ANS_NAME()}
    Goal: [_]{$answer3}{10}{ cmp_options => { credit => [ $answer1Name, $answer2Name ] } }
    END_PGML

Note that the C<credit> and C<weight> options can be used together. For example:

    BEGIN_PGML
    Optional Answer: [_]{$optional}{10}{$optionalName = NEW_ANS_NAME()}{{ weight => 20 }}
    Goal: [_]{$goalAnswer}{10}{ cmp_options => { credit => $optionalName, weight => 80 } }
    END_PGML

This way, if the "optional" answer is correct but the "goal" answer is not, the
problem score will be 20%, but if the "goal" answer is correct, the problem
score will be 100%.

One caveat to keep in mind is that credit is given to an "optional" answer ONLY
if the answer is left blank (or is actually correct). Credit will NOT be given
if an "optional" answer is incorrect, even if the "goal" answer IS correct.

When credit is given to an "optional" answer due to the "goal" answer being
correct, a message will be added to the "optional" answer stating, "This answer
was marked correct because the primary answer is correct."

=cut

sub avg_problem_grader {
my ($rh_evaluated_answers, $rh_problem_state, %form_options) = @_;
my ($answers, $problem_state, %form_options) = @_;

my %evaluated_answers = %{$rh_evaluated_answers};
my %problem_result = (score => 0, errors => '', type => 'avg_problem_grader', msg => '');

# By default the old problem state is simply passed back out again.
my %problem_state = %$rh_problem_state;

# Initial setup of the answer.
my $total = 0;
my %problem_result = (
score => 0,
errors => '',
type => 'avg_problem_grader',
msg => '',
);
my $count = keys %evaluated_answers;
$problem_result{msg} = maketext('You can earn partial credit on this problem.') if $count > 1;
$problem_result{msg} = maketext('You can earn partial credit on this problem.') if keys %$answers > 1;

# Return unless answers have been submitted.
return (\%problem_result, \%problem_state) unless $form_options{answers_submitted} == 1;
return (\%problem_result, $problem_state) unless $form_options{answers_submitted} == 1;

# Answers have been submitted -- process them.
for my $ans_name (keys %evaluated_answers) {
if (ref $evaluated_answers{$ans_name} eq 'HASH' || ref $evaluated_answers{$ans_name} eq 'AnswerHash') {
$total += $evaluated_answers{$ans_name}{score} // 0;
my %credit;

# Get the score for each answer (error if can't recognize the answer format).
for my $ans_name (keys %$answers) {
if (ref($answers->{$ans_name}) =~ m/^(HASH|AnswerHash)$/) {
$credit{$ans_name} = $answers->{$ans_name}{score} // 0;
} else {
$problem_result{error} = "Error: Answer $ans_name is not a hash: $answers->{$ans_name}";
die "Error: Answer |$ans_name| is not a hash reference\n"
. $evaluated_answers{$ans_name}
. 'This probably means that the answer evaluator for this answer is not working correctly.';
$problem_result{error} = "Error: Answer $ans_name is not a hash: $evaluated_answers{$ans_name}";
. $answers->{$ans_name}
. "\nThis probably means that the answer evaluator for this answer is not working correctly.";
}
}

# Calculate the score.
$problem_result{score} = $total / $count if $count;
# Mark any optional answers as correct, if the goal answers are right and the optional answers are blank.
for my $ans_name (keys %$answers) {
if ($credit{$ans_name} == 1 && defined $answers->{$ans_name}{credit}) {
for my $credit_name (
ref($answers->{$ans_name}{credit}) eq 'ARRAY'
? @{ $answers->{$ans_name}{credit} }
: $answers->{$ans_name}{credit})
{
if (!defined $answers->{$credit_name}{student_ans}
|| $answers->{$credit_name}{student_ans} =~ m/^\s*$/)
{
$answers->{$credit_name}{score} = 1;
$answers->{$credit_name}{ans_message} =
maketext('This answer was marked correct because the primary answer is correct.');
$credit{$credit_name} = 1;
}
}
}
}

++$problem_state{num_of_correct_ans} if $total == $count;
++$problem_state{num_of_incorrect_ans} if $total < $count;
$problem_state{recorded_score} //= 0;
my ($score, $total) = (0, 0);

# Add up the weighted scores
for my $ans_name (keys %$answers) {
my $weight = $answers->{$ans_name}{weight} // 1;
$total += $weight;
$score += $weight * $credit{$ans_name};
}

$problem_result{score} = $total ? $score / $total : 0;

++$problem_state->{num_of_correct_ans} if $score == $total;
++$problem_state->{num_of_incorrect_ans} if $score < $total;
$problem_state->{recorded_score} //= 0;

# Increase recorded score if the current score is greater.
$problem_state{recorded_score} = $problem_result{score}
if $problem_result{score} > $problem_state{recorded_score};
$problem_state->{recorded_score} = $problem_result{score}
if $problem_result{score} > $problem_state->{recorded_score};

warn "Error in grading this problem the total $total is larger than $count" if $total > $count;
warn "Error in grading this problem: The score $score is larger than the total $total." if $score > $total;

return (\%problem_result, \%problem_state);
return (\%problem_result, $problem_state);
}

=head2 Utility subroutines
