# Empirical plots from FROC data {#empirical}
```{r setup, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
library(RJafroc)
library(ggplot2)
library(kableExtra)
library(gridExtra)
library(grid)
```
## How much finished: 100% {#empirical-how-much-finished}
## Introduction {#empirical-intro}
FROC data consists of mark-rating pairs. An important distinction is made between *latent* marks (suspicious regions perceived by the visual system but not necessarily marked) and *actual* marks. A key table (used in later chapters) summarizing FROC notation is introduced which allows unambiguous description of the data.
Empirical plots refer to those generated directly from the data. Empirical operating characteristics (empirical plots) introduced in this chapter are the FROC, the inferred ROC, the alternative FROC (AFROC), the weighted AFROC (wAFROC), the AFROC1 and the wAFROC1. Formulae for coordinates of each plot are given in terms of the underlying mark-rating data.
Plots are *visual* depictions of performance. Scalar measures derived from plots can serve as *quantitative* measures of performance. Empirical area under curve (AUC) measures associated with all plots are illustrated with a small FROC dataset. Except for the FROC plot all of the other plots include a straight line extension from the uppermost observed operating point to (1,1).
If one ignores localization information and simply considers the highest rating on each case as representing its ROC rating, one can define the empirical ROC plot and associated area measure ROC-AUC from FROC data. Since ROC-AUC is a fundamental measure of classification accuracy between non-diseased and diseased cases, any other proposed area measure that does not ignore location information should, if it is to be useful, correlate with ROC-AUC. These correlations are explored using the small dataset and it is shown that FROC-AUC is a poor measure of performance. While ways of circumventing FROC-AUC have been proposed and used by some investigators, none are satisfactory, and the claim of this book is that **the FROC should never be used to quantify performance**. The basic reason is simple: unlike all of the other plots defined in this chapter, the FROC plot is not constrained to lie within the unit square and the area under a straight line extension to (1,1) is meaningless.
Some of the other empirical plots and AUCs are less familiar as compared to the well-known ROC plots and ROC-AUC. As an aid to understanding them I have included numerical ("hand") calculations of the empirical plots and AUCs for the small dataset. The calculations also illustrate the advantage of using the *weighted* versions implemented in some of the empirical plots (lesion weights are a way of allowing one to model the clinical importance (i.e., morbidity/mortality) associated with different types of lesions present in a clinical dataset; a weighted plot assures that each case gets the same importance in determining AUC regardless of the number of lesions in it).
Computing the AUCs from plots can be tedious at best; computational formulae are needed which would allow any of the AUCs to be calculated directly from the FROC ratings. Appendix 1 proves a formula for the wAFROC-AUC, Appendix 2 provides a physical interpretation of the area under the straight line extension for this plot. Appendix 3 summarizes, without proofs, the computational formulae for AUCs for all plots introduced in this chapter.
## FROC data and notation {#empirical-mark-rating-pairs}
### LLs vs. NLs
Each mark indicates the location of a region suspicious enough to warrant reporting and the rating is the associated confidence level. A mark is recorded as a *lesion localization* (LL) if it is sufficiently close to a true lesion and otherwise it is recorded as a *non-lesion localization* (NL).
In an FROC study the number of marks on a case is an a-priori unknown non-negative random integer. It is incorrect and naive to estimate it by dividing the anatomically-relevant image area by the lesion area because not all regions of the image are equally likely to have lesions, lesions do not have the same size, and perhaps most important, radiologists don't assign equal attention units to all areas of the image ^[Currently the best insight into the numbers and locations of marks per case is obtained from eye-tracking studies [@duchowski2017eye], but the information is incomplete as eye-tracking studies can only measure *foveal* gaze and not lesions found by *peripheral* vision. Moreover, such studies are near impossible to conduct in a clinical setting (at least with the eye-tracking apparatus that I am familiar with).].
### Latent vs. actual marks
To distinguish between suspicious regions that were considered for marking but not necessarily marked and regions that were actually marked, it is necessary to introduce the distinction between *latent* marks and *actual* marks.
- A *latent* mark is defined as a suspicious region, regardless of whether or not it was marked. A latent mark becomes an *actual* mark if it is marked.
- A latent mark is a latent LL if it is close to a true lesion and otherwise it is a latent NL.
- A non-diseased case can only have latent NLs. A diseased case can have latent NLs and latent LLs.
- If marked a latent NL is recorded as an actual NL.
- If not marked a latent NL is an *unobservable event*. This is an important point.
- In contrast unmarked lesions are observable events -- one knows (trivially) which lesions were not marked.
### z-samples vs. ratings
z-samples are conceptual quantities that can range from $-\infty$ to $+\infty$. Ratings are observed values typically collected as integers but any ordered set of values will do where larger values correspond to greater suspicion for disease. The conversion from z-samples to ratings is accomplished by adopting a binning rule.
### Binning rule
Recall that ROC data modeling requires the existence of a *case-dependent* decision variable, or z-sample $z$, and case-independent decision thresholds $\zeta_r$, where $r = 0, 1, ..., R_{ROC}-1$ and $R_{ROC}$ is the number of ROC study bins ^[The subscript is used to make explicit the paradigm used as otherwise it leads to confusion.], together with a *binning rule* that if $\zeta_r \leq z < \zeta_{r+1}$ the case is rated $r + 1$. Dummy cutoffs are defined as $\zeta_0 = -\infty$ and $\zeta_{R_{ROC}} = \infty$. The z-sample applies to the whole case. To summarize:
\begin{equation}
\left.
\begin{aligned}
\text{if} \left (\zeta_r \le z < \zeta_{r+1} \right )\Rightarrow \text {rating} = r+1\\
r = 0, 1, ..., R_{ROC}-1\\
\zeta_0 = -\infty\\
\zeta_{R_{ROC}} = \infty\\
\end{aligned}
\right \}
(\#eq:binning-rule-roc)
\end{equation}
Analogously, FROC data modeling requires the existence of a *case and location dependent* z-sample for each latent mark and *case and location independent* reporting thresholds $\zeta_r$, where $r = 1, ..., R_{FROC}$ and $R_{FROC}$ is the number of FROC study bins, and the binning rule that a latent mark is marked and rated $r$ if $\zeta_r \leq z < \zeta_{r+1}$. Dummy cutoffs are defined as $\zeta_0 = -\infty$ and $\zeta_{R_{FROC}+1} = \infty$. For the same numbers of non-dummy cutoffs, the number of FROC bins is one less than the number of ROC bins. For example, 4 non-dummy cutoffs $\zeta_1, \zeta_2, \zeta_3, \zeta_4$ can correspond to a 5-rating ROC study or to a 4-rating FROC study. To summarize:
\begin{equation}
\left.
\begin{aligned}
\text{if} \left (\zeta_r \le z < \zeta_{r+1} \right )\Rightarrow \text {rating} = r\\
r = 1, 2, ..., R_{FROC}\\
\zeta_0 = -\infty\\
\zeta_{R_{FROC}+1} = \infty\\
\end{aligned}
\right \}
(\#eq:binning-rule-froc)
\end{equation}
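To make the binning rule concrete, here is a minimal sketch using base `R`'s `findInterval()`; the cutoffs and z-samples are hypothetical and not taken from any dataset in this book.
```{r, echo=TRUE}
# Sketch of the FROC binning rule: findInterval() returns the index r satisfying
# zeta_r <= z < zeta_{r+1}; an index of 0 means the z-sample fell below zeta_1,
# i.e., the latent mark is not marked. Cutoffs and z-samples are hypothetical.
zeta <- c(-0.5, 0.5, 1.5, 2.5)       # zeta_1, ..., zeta_4, i.e., R_FROC = 4
z    <- c(-1.2, 0.1, 0.9, 1.7, 3.0)  # z-samples of five latent marks
findInterval(z, zeta)                # FROC ratings 0 1 2 3 4, where 0 = unmarked
findInterval(z, zeta) + 1            # ROC-rule ratings for the same cutoffs (5 bins)
```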
### Notation {#empirical-notation}
*Clear notation is vital to understanding this paradigm.* The notation needs to account for case and location dependencies of ratings and the distinction between case-level and location-level ground truths. *The notation also has to account for cases with no marks.*
FROC notation is summarized in Table \@ref(tab:empirical-notation) in which "marks" refer to "latent marks". The first column is the row number, the second column has the symbol(s), and the third column has the meaning(s) of the symbol(s).
```{r empirical-notation, echo=FALSE}
frocNotation = array(dim = c(17,3))
frocNotation[1,] <- c("1", "$t$", "Case-level truth: 1 non-diseased, 2 diseased case")
frocNotation[2,] <- c("2", "$K_t$", "Number of cases with case-level truth $t$")
frocNotation[3,] <- c("3", "$k_t t$", "Case $k_t$ in case-level truth $t$")
frocNotation[4,] <- c("4", "$s$", "Location-level truth: 1 for NL and 2 for LL")
frocNotation[5,] <- c("5", "$l_s s$", "Mark $l_s$ in location-level truth $s$")
frocNotation[6,] <- c("6", "$N_{k_t t}$", "Number of NLs in case $k_t t$")
frocNotation[7,] <- c("7", "$L_{k_2 2}$", "Number of lesions in case $k_2 2$")
frocNotation[8,] <- c("8", "$z_{k_t t l_1 1}$", "$z$-sample for case $k_t t$ and NL mark $l_1 1$")
frocNotation[9,] <- c("9", "$z_{k_2 2 l_2 2}$", "$z$-sample for case $k_2 2$ and LL mark $l_2 2$")
frocNotation[10,] <- c("10", "$r_{k_t t l_s s}$", "rating for case $k_t t$ and LL/NL mark $l_s s$")
frocNotation[11,] <- c("11", "$R_{FROC}$", "Number of FROC bins")
frocNotation[12,] <- c("12", "$\\zeta_1$", "Lowest non-dummy reporting threshold")
frocNotation[13,] <- c("13", "$\\zeta_r$", "$r$ = 2, 3, ..., non-dummy reporting thresholds")
frocNotation[14,] <- c("14", "$\\zeta_0, \\zeta_{R_{FROC}+1}$", "Dummy thresholds, negative and positive infinity")
frocNotation[15,] <- c("15", "$W_{k_2 l_2}$", "Weight of lesion $l_2 2$ in case $k_2 2$, explained later")
frocNotation[16,] <- c("16", "$L_{max}$", "Maximum number of lesions per case in dataset")
frocNotation[17,] <- c("17", "$L_T$", "Total number of lesions in dataset")
df <- as.data.frame(frocNotation)
colnames(df) <- c("Row", "Symbol", "Meaning")
knitr::kable(df, caption = "FROC notation; all marks refer to latent marks.", escape = FALSE)
```
### Comments
- Row 1: The case-truth index $t$ refers to the case (or patient), with $t = 1$ for non-diseased and $t = 2$ for diseased cases. As a useful mnemonic, $t$ is for *truth*.
- Row 2: $K_t$ is the number of cases with truth state $t$; specifically, $K_1$ is the number of non-diseased cases and $K_2$ the number of diseased cases.
- Row 3: Two indices $k_t t$ are needed to select case $k_t$ in truth state $t$. As a useful mnemonic, $k$ is for *case*.
- Row 4: $s$ is the location-level truth state: 1 for a non-diseased region (NL) and 2 for a lesion (LL).
- Row 5: Similar to row 3, two indices $l_s s$ are needed to select latent mark $l_s$ in location-level truth state $s$. As a useful mnemonic, $l$ is for *location*.
- Row 6: $N_{k_t t}$ is the total number of latent NL marks in case $k_t t$. Latent NL marks are possible on non-diseased and diseased cases (i.e., both values of $t$ are allowed).
- Row 7: $L_{k_2 2}$ is the number of lesions in diseased case $k_2 2$.
- Row 8: The z-sample for case $k_t t$ and NL mark $l_1 1$ is denoted $z_{k_t t l_1 1}$. The range of a z-sample is $-\infty < z_{k_t t l_1 1} < \infty$, provided $l_1 \neq \varnothing$; otherwise, it is an unobservable event.
- Row 9: The z-sample of a latent LL is $z_{k_2 2 l_2 2}$. Unmarked lesions are observable events and are assigned negative infinity ratings, so the null-set notation is unnecessary for them.
- Row 10: The rating of a mark is $r_{k_t t l_s s}$. Unmarked NLs are unobservable events. Unmarked lesions are assigned negative infinity ratings.
- Row 11: $R_{FROC}$ is the number of bins in the FROC study.
- Rows 12, 13 and 14: The cutoffs in the FROC study. The lowest threshold is $\zeta_1$. The other non-dummy thresholds are $\zeta_r$ where $r=2,3,...,R_{FROC}$. The dummy thresholds are $\zeta_0 = -\infty$ and $\zeta_{R_{FROC}+1} = \infty$.
- Row 15: $W_{k_2 l_2}$ is the weight (i.e., clinical importance) of lesion $l_2 2$ in diseased case $k_2 2$. The weights of lesions in a case sum to unity: $\sum_{l_2 = 1}^{L_{k_2 2}}W_{k_2 l_2} = 1$.
- Row 16: $L_{max}$ is the maximum number of lesions per case in the dataset.
- Row 17: $L_T$ is the total number of lesions in the dataset.
### A conceptual and notational issue {#empirical-indexing-marks}
An aspect of FROC data, *that there could be cases with no NL marks, no matter how low the reporting threshold*, has created problems both from conceptual and notational viewpoints.
Taking the conceptual issue first, my thinking (prior to 2004) was that as the reporting threshold $\zeta_1$ is lowered, the number of NL marks per case increases almost indefinitely. I visualized this process as each case "filling up" with NL marks [^empirical1-1]. In fact the first model of FROC data [@chakraborty1989maximum] predicts that as the reporting threshold is lowered to $\zeta_1 = -\infty$, the number of NL marks per case approaches $\infty$. However, actual FROC datasets do not agree with this thinking. This is one reason I introduced the radiological search model (RSM) [@chakraborty2006search]. I will have more to say about this in Chapter \@ref(rsm), but for now I state one assumption of the RSM: the number of latent NL marks is a Poisson distributed random integer with a finite value for the mean parameter of the distribution. This means that the actual number of latent NL marks per case can be 0, 1, 2, ..., whose average (over all cases) is a finite number. It is highly unlikely that any case will have an infinite number of NLs.
With this background, let us return to the conceptual issue: why does the observer not keep "filling-up" the image with NL marks? The answer is that *the observer can only mark regions that have a non-zero chance of being a lesion*. For example, if the actual number of latent NLs on a particular case is 2, then, as the reporting threshold is lowered, the observer will make at most two NL marks. Having exhausted these two regions the observer will not mark any more regions because there are no more regions to be marked - *all other regions in the image have, in the perception of the observer, zero chance of being a lesion*.
[^empirical1-1]: I expected the number of NL marks per image to be limited only by the ratio of image size to lesion size, i.e., larger values for smaller lesions.
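A one-line simulation illustrates the kind of per-case latent NL counts implied by this assumption; the mean value used below is hypothetical.
```{r, echo=TRUE}
# The RSM assumes the number of latent NLs per case is Poisson distributed with
# a finite mean; the value lambda = 1.3 is purely illustrative.
set.seed(1)
rpois(n = 10, lambda = 1.3) # simulated numbers of latent NLs on 10 cases
```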
The notational issue is how to handle cases with no latent NL marks. Basically it involves restricting summations over cases to those cases which have at least one latent NL mark, i.e., $N_{k_t t} > 0$, as in the following:
* $l_1 = \{1, 2, ..., N_{k_t t}\}$ indexes latent NL marks, provided the case has at least one latent NL mark; otherwise $N_{k_t t} = 0$ and $l_1 = \varnothing$, the null set. The possible values of $l_1$ are $l_1 = \left \{ \varnothing \right \}\oplus \left \{ 1,2,...,N_{k_t t} \right \}$. The null set applies when the case has no latent NL marks and $\oplus$ is the "exclusive-or" symbol ("exclusive-or" is used in the English sense: "one or the other, but not neither nor both").
* $l_2 = \left \{ 1,2,...,L_{k_2 2} \right \}$ indexes latent LL marks. Unmarked LLs are assigned negative infinity ratings as these are observable events. The null set notation is not needed because for every diseased case $L_{k_2 2} > 0$.
## The FROC plot {#empirical-froc-plot-1}
Definitions:
>
- $NLF_r \equiv NLF(\zeta_r)$ = cumulated NL counts with z-sample $\geq$ threshold $\zeta_r$ divided by total number of cases.
- $LLF_r \equiv LLF(\zeta_r)$ = cumulated LL counts with z-sample $\geq$ threshold $\zeta_r$ divided by total number of lesions.
Definition of the empirical FROC plot and AUC:
>
The empirical FROC plot connects adjacent operating points $\left (\text{NLF}_r, \text{LLF}_r \right )$, including the origin (0,0) and the observed end-point, with straight lines. The area under this plot is the empirical FROC AUC, denoted $A_{\text{FROC}}$. **Warning: this is a particularly dangerous figure of merit, as will shortly become clear.**
Using the notation of Table \@ref(tab:empirical-notation), assuming binned data[^empirical1-2], and letting $n(x)$ denote the number of events $x$:
[^empirical1-2]: This is not a limiting assumption: if the data is continuous, for finite numbers of cases, no ordering information is lost if the number of ratings is chosen large enough.
\begin{equation}
\text{NLF}_r = \frac{n\left ( \text{NLs rated} \geq \zeta_r\right )}{K_1 + K_2}
(\#eq:empirical-NLF1)
\end{equation}
and
\begin{equation}
\text{LLF}_r = \frac{n\left ( \text{LLs rated} \geq \zeta_r\right )}{L_T}
(\#eq:empirical-LLF1)
\end{equation}
The allowed values of $r$ are:
\begin{equation}
r = 1, 2, ...,R_{FROC}
(\#eq:empirical-range-r)
\end{equation}
Due to the ordering of the thresholds, i.e., $\zeta_1 < \zeta_2 < ... < \zeta_{R_{FROC}}$, higher values of $r$ correspond to lower operating points. The uppermost operating point, i.e., that defined by $r = 1$, is referred to as the *observed end-point*.
Equations \@ref(eq:empirical-NLF1) and \@ref(eq:empirical-LLF1) are equivalent to:
\begin{equation}
\text{NLF}_r = \frac{1}{K_1+K_2} \sum_{t=1}^{2} \sum_{k_t=1}^{K_t} \mathbb{I} \left ( N_{k_t t} > 0 \right )\sum_{l_1=1}^{N_{k_t t}} \mathbb{I} \left ( z_{k_t t l_1 1} \geq \zeta_r \right )
(\#eq:empirical-NLFr)
\end{equation}
and
\begin{equation}
\text{LLF}_r = \frac{1}{L_T} \sum_{k_2=1}^{K_2} \sum_{l_2=1}^{L_{k_2 2}} \mathbb{I} \left ( z_{k_2 2 l_2 2} \geq \zeta_r \right )
(\#eq:empirical-LLFr)
\end{equation}
The indicator function is defined as unity if the argument is true and zero otherwise:
\begin{equation}
\left.
\begin{matrix}
\mathbb{I}\left( \text{True} \right) & = & 1\\
\mathbb{I}\left( \text{False} \right) & = & 0
\end{matrix}
\right \}
(\#eq:empirical-indicator-function)
\end{equation}
In Eqn. \@ref(eq:empirical-NLFr) $\mathbb{I} \left ( N_{k_t t} > 0 \right )$ ensures that *only cases with at least one latent NL* are included in the summation (recall that $N_{k_t t}$ is the total number of latent NLs in case $k_t t$). The term $\mathbb{I} \left ( z_{k_t t l_1 1} \geq \zeta_r \right )$ counts over all NL marks with ratings $\geq \zeta_r$. The right hand side yields the total number of NLs in the dataset with z-samples $\geq \zeta_r$ and dividing by the total number of cases yields $\text{NLF}_r$. This equation also shows explicitly that NLs on both non-diseased ($t=1$) and diseased ($t=2$) cases contribute to NLF.
In Eqn. \@ref(eq:empirical-LLFr) a summation over $t$ is not needed as only diseased cases contribute to LLF. A term like $\mathbb{I} \left ( L_{k_2 2} > 0 \right )$ would be superfluous since $L_{k_2 2} > 0$ as each diseased case must have at least one lesion. The term $\mathbb{I} \left ( z_{k_2 2 l_2 2} \geq \zeta_r \right )$ counts over all LL marks with ratings $\geq \zeta_r$. Dividing by $L_T$, the total number of lesions in the dataset, yields $\text{LLF}_r$.
Since $\zeta_{R_{FROC}+1} = \infty$, according to Eqn. \@ref(eq:empirical-NLFr) and Eqn. \@ref(eq:empirical-LLFr) the value $r = R_{FROC}+1$ yields the trivial operating point (0,0).
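The following minimal sketch implements Eqn. \@ref(eq:empirical-NLFr) and Eqn. \@ref(eq:empirical-LLFr) directly, using the $-\infty$ convention for unmarked locations; the `nl` and `ll` matrices and the lesion count are hypothetical (a small dataset laid out in the same way is analyzed in detail later in this chapter).
```{r, echo=TRUE}
# nl: (K1+K2) x maxNL matrix of NL z-samples, non-diseased cases first;
# ll: K2 x maxLL matrix of LL z-samples; -Inf denotes an unmarked location.
# Rows with no latent NLs contribute nothing, so the indicator I(N > 0) needs
# no explicit check here.
empNLF <- function(zeta, nl) sum(nl >= zeta) / nrow(nl)
empLLF <- function(zeta, ll, L_T) sum(ll >= zeta) / L_T  # L_T = total number of lesions
nl <- rbind(c( 0.8, -Inf),  # non-diseased case 1: one NL rated 0.8
            c(-Inf, -Inf),  # non-diseased case 2: no NL marks
            c( 1.2, -Inf),  # diseased case 1: one NL rated 1.2
            c(-Inf, -Inf))  # diseased case 2: no NL marks
ll <- rbind(c( 2.0, -Inf),  # diseased case 1: its single lesion marked, rated 2.0
            c( 1.5, -Inf))  # diseased case 2: one of its two lesions marked, rated 1.5
empNLF(1.0, nl)             # 1/4: one NL at or above the threshold, four cases
empLLF(1.0, ll, L_T = 3)    # 2/3: two lesions at or above the threshold, three lesions in all
```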
### The observed FROC end-point and its semi-constrained property {#empirical-end-point}
The abscissa of the observed end-point $NLF_1$, is defined by:
\begin{equation}
\text{NLF}_1 = \frac{1}{K_1+K_2} \sum_{t=1}^{2} \sum_{k_t=1}^{K_t} \mathbb{I} \left ( N_{k_t t} > 0 \right ) \sum_{l_1=1}^{N_{k_t t}} \mathbb{I} \left ( z_{k_t t l_1 1} \geq \zeta_1 \right )
(\#eq:empirical-NLF11)
\end{equation}
Since each case can have an arbitrary non-negative number of NLs, $NLF_1$ is not constrained to the unit interval; it need not equal unity, except fortuitously, and it can even exceed unity.
The ordinate of the observed end-point $LLF_1$, is defined by:
\begin{equation}
\left.
\begin{aligned}
\text{LLF}_1 =& \frac{1}{L_T} \sum_{k_2=1}^{K_2} \sum_{l_2=1}^{L_{k_2 2}} \mathbb{I} \left ( z_{k_2 2 l_2 2} \geq \zeta_1 \right ) \\
\leq& 1
\end{aligned}
\right \}
(\#eq:empirical-LLF1a)
\end{equation}
The numerator is the total number of lesions that were actually marked. The ratio is the fraction of lesions that are marked, which is $\leq 1$.
This is the **semi-constrained property of the observed end-point**, namely, while the *ordinate* is constrained to the range (0,1) the *abscissa* is not.
### Futility of extrapolation outside the observed end-point {#empirical-froc-plot-futility-extrapolation}
To understand this consider the expression for $NLF_0$, i.e., using Eqn. \@ref(eq:empirical-NLFr) with $r = 0$:
\begin{equation}
\text{NLF}_0 = \frac{1}{K_1+K_2} \sum_{t=1}^{2} \sum_{k_t=1}^{K_t} \mathbb{I} \left ( N_{k_t t} > 0 \right ) \sum_{l_1=1}^{N_{k_t t}} \mathbb{I} \left ( z_{k_t t l_1 1} \geq -\infty \right )
\end{equation}
The right hand side of this equation can be separated into two terms, the contribution of latent NLs with z-samples in the range $z \geq \zeta_1$ and those in the range $-\infty \leq z < \zeta_1$. The first term yields the abscissa of the observed end-point, Eqn. \@ref(eq:empirical-NLF11) but the 2nd term cannot be evaluated:
\begin{equation}
\left.
\begin{aligned}
\text{1st term}=&\left (\frac{1}{K_1+K_2} \right )\sum_{t=1}^{2} \sum_{k_t=1}^{K_t} \mathbb{I} \left ( N_{k_t t} > 0 \right ) \sum_{l_1=1}^{N_{k_t t}} \mathbb{I} \left ( z_{k_t t l_1 1} \ge \zeta_1 \right )\\
=&\text{NLF}_1\\
\text{2nd term}=&\left (\frac{1}{K_1+K_2} \right )\sum_{t=1}^{2} \sum_{k_t=1}^{K_t} \mathbb{I} \left ( N_{k_t t} > 0 \right ) \sum_{l_1=1}^{N_{k_t t}} \mathbb{I} \left ( -\infty \leq z_{k_t t l_1 1} < \zeta_1 \right )\\
=&\frac{\text{unknown number}}{K_1+K_2}
\end{aligned}
\right \}
(\#eq:empirical-NLF0a)
\end{equation}
The 2nd term represents the contribution of *unmarked NLs*, i.e., latent NLs whose z-samples were below $\zeta_1$. It determines how much further to the right the observer's NLF would have moved relative to $NLF_1$ *if* one could get the observer to lower the reporting criterion to $-\infty$. *Since the observer may not oblige, this term cannot, in general, be evaluated.* Therefore $NLF_0$ cannot be evaluated. The basic problem is that *unmarked latent NLs represent unobservable events*.
Turning our attention to $LLF_0$:
\begin{equation}
\left.
\begin{aligned}
\text{LLF}_0 =& \frac{ \sum_{k_2=1}^{K_2} \sum_{l_2=1}^{L_{k_2 2}} \mathbb{I} \left ( z_{k_2 2 l_2 2} \geq -\infty \right ) }{L_T}\\
=& 1
\end{aligned}
\right \}
(\#eq:empirical-LLF0)
\end{equation}
Unlike unmarked latent NLs, *unmarked lesions can safely be assigned the $-\infty$ rating, because an unmarked lesion is an observable event*. The right hand side of Eqn. \@ref(eq:empirical-LLF0) evaluates to unity. However, since the corresponding abscissa $NLF_0$ is undefined, one cannot plot this point. It follows that one cannot extrapolate outside the observed end-point.
The above formalism should not obscure the fact that the futility of extrapolation outside the observed end-point of the FROC is obvious for scientific reasons: extrapolating outside the range of the observed data is generally not a good idea.
### Illustration with a dataset {#empirical-froc-plot-illustration}
The following plot uses `dataset04` [@zanca2009evaluation] to illustrate an empirical FROC plot. This dataset has $L_{max} = 3$ and $\max{(N_{k_t t})} = 3$, and a 5-point rating scale was employed. The plot applies to reader 1 in modality (treatment) 1 only; the full dataset has 5 modalities and 4 readers.
```{r, echo=TRUE}
ret <- PlotEmpiricalOperatingCharacteristics(
dataset04,
trts = 1, rdrs = 1, opChType = "FROC")
print(ret$Plot)
```
Shown next are FROC-AUCs for this dataset calculated using the formula in Eqn. \@ref(eq:empirical-computational-froc). All 20 modality-reader combinations are shown.
```{r, echo=TRUE}
auc_froc <- as.data.frame(UtilFigureOfMerit(dataset04, FOM = "FROC"))
print(auc_froc)
```
The value `r auc_froc[1,1]` for `trt1` and `rdr1` is the area under the FROC plot shown above.
```{r, echo=FALSE}
auc_froc <- as.numeric(as.matrix(UtilFigureOfMerit(dataset04, FOM = "FROC")))
```
## The inferred-ROC plot {#empirical-ROC}
By adopting a rule that converts the mark-rating data on each case to a single rating per case (commonly the highest rating rule is used ^[The highest rating method was used in early FROC modeling in [@bunch1977free] and in [@swensson1996unified], the latter in the context of LROC paradigm modeling.]), it is possible to infer ROC data from FROC mark-rating data.
### The inferred-ROC z-sample {#empirical-ROC-fpf}
The highest ROC z-sample of a case, denoted $h_{k_t t}$, is the z-sample of the highest rated latent mark on the case or $-\infty$ if the case has no latent marks. For non-diseased cases $t = 1$ the maximum is over all latent NLs on the case. For diseased cases $t = 2$ the maximum is over all latent NLs *and* latent LLs on the case.
When there is little possibility for confusion, the prefix "inferred" is suppressed. ROC z-samples on non-diseased cases are referred to as FP z-samples and those on diseased cases as TP z-samples.
Using the by now familiar cumulation procedure, FP counts are cumulated to calculate FPF and likewise TP counts are cumulated to calculate TPF.
Definitions:
- $FPF(\zeta)$ = cumulated inferred FP counts with $h_{k_1 1} \geq \zeta$ divided by total number of non-diseased cases.
- $TPF(\zeta)$ = cumulated inferred TP counts with $h_{k_2 2} \geq \zeta$ divided by total number of diseased cases.
Definition of ROC plot:
>
- The ROC is the plot of inferred $TPF(\zeta)$ vs. inferred $FPF(\zeta)$.
- *The plot includes a straight line extension from the observed end-point to (1,1)*.
The inferred ROC false positive (FP) z-sample for non-diseased case $k_1 1$ is defined by:
\begin{equation}
\left.
\begin{aligned}
\begin{matrix}
FP_{k_1 1}=&\max_{l_1} \left ( z_{k_1 1 l_1 1 } \right ) & \text{if} & l_1 \neq \varnothing\\
FP_{k_1 1}=&-\infty & \text{if} & l_1 = \varnothing
\end{matrix}
\end{aligned}
\right \}
(\#eq:empirical-FP)
\end{equation}
If the case has at least one latent NL mark, then $l_1 \neq \varnothing$, where $\varnothing$ is the null set, and the first definition applies. If the case has no latent NL marks, then $l_1 = \varnothing$, and the second definition applies. $FP_{k_1 1}$ is the maximum z-sample over all latent marks occurring on non-diseased case $k_1 1$, or $-\infty$ if the case has no latent marks (this is allowed because a non-diseased case with no marks is an observable event). The corresponding false positive fraction is defined by:
\begin{equation}
\text{FPF}_r \equiv \text{FPF} \left ( \zeta_r \right ) = \frac{1}{K_1} \sum_{k_1=1}^{K_1} \mathbb{I} \left ( FP_{k_1 1} \geq \zeta_r\right )
(\#eq:empirical-fpf)
\end{equation}
### Inferred TPF {#empirical-ROC-tpf}
The inferred true positive (TP) z-sample for diseased case $k_2 2$ is defined by one of the following three equations, as explained below:
\begin{equation}
\begin{matrix}
TP_{k_2 2} = \max_{l_1 l_2}\left ( z_{k_2 2 l_1 1} ,z_{k_2 2 l_2 2} \right ) & \text{if} & l_1 \neq \varnothing
\end{matrix}
(\#eq:empirical-TP1)
\end{equation}
or
\begin{equation}
\begin{matrix}
TP_{k_2 2} = \max_{l_2} \left ( z_{k_2 2 l_2 2} \right )
& \text{if} & \left( l_1 = \varnothing \right) \land \left (\max_{l_2}{\left (z_{k_2 2 l_2 2} \right )} > -\infty \right )
\end{matrix}
(\#eq:empirical-TP2)
\end{equation}
or
\begin{equation}
\begin{matrix}
TP_{k_2 2} = -\infty
& \text{if} & \left ( l_1 = \varnothing \land\left ( \max_{l_2}{\left (z_{k_2 2 l_2 2} \right )} = -\infty \right ) \right )
\end{matrix}
(\#eq:empirical-TP3)
\end{equation}
Here $\land$ is the logical AND operator. An explanation is in order. Consider Eqn. \@ref(eq:empirical-TP1). There are two z-samples inside the $\max$ operator: $z_{k_2 2 l_1 1} ,z_{k_2 2 l_2 2}$. The first z-sample is from a NL on a diseased case, as per the $l_1 1$ subscripts, while the second is from a LL on the same diseased case, as per the $l_2 2$ subscripts.
- If $l_1 \neq \varnothing$ then Eqn. \@ref(eq:empirical-TP1) applies, i.e., one takes the maximum over all z-samples, NLs and LLs, whichever is higher, on the diseased case.
- If $l_1 = \varnothing$ and at least one lesion is marked, then Eqn. \@ref(eq:empirical-TP2) applies, i.e., one takes the maximum z-sample over all marked LLs.
- If $l_1 = \varnothing$ and no lesions are marked, then Eqn. \@ref(eq:empirical-TP3) applies; this represents an unmarked diseased case; the $-\infty$ z-sample assignment is justified because an unmarked diseased case is an observable event.
The inferred true positive fraction $\text{TPF}_r$ is defined by:
\begin{equation}
\text{TPF}_r \equiv \text{TPF}(\zeta_r) = \frac{1}{K_2}\sum_{k_2=1}^{K_2} \mathbb{I}\left ( TP_{k_2 2} \geq \zeta_r \right )
(\#eq:empirical-TPF)
\end{equation}
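The highest-rating inference is straightforward to implement. The sketch below assumes the matrix layout used later in this chapter (an `NL` matrix with one row per case, non-diseased cases first, and an `LL` matrix with one row per diseased case, with $-\infty$ for unmarked locations); with that convention the three TP equations collapse into a single `max` because $-\infty$ is the identity element for the maximum. The function name is mine, not part of `RJafroc`.
```{r, echo=TRUE}
# Sketch of the highest-rating rule: FP = max over latent NLs on a non-diseased
# case; TP = max over latent NLs and LLs on a diseased case. Because -Inf encodes
# "no marks", the three cases of Eqns. (empirical-TP1)-(empirical-TP3) reduce to
# a single expression.
highestRatingROC <- function(NL, LL, K1, K2) {
  FP <- apply(NL[1:K1, , drop = FALSE], 1, max)
  TP <- pmax(apply(NL[(K1 + 1):(K1 + K2), , drop = FALSE], 1, max),
             apply(LL, 1, max))
  # FPF(zeta) = mean(FP >= zeta); TPF(zeta) = mean(TP >= zeta)
  list(FP = FP, TP = TP)
}
```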
### The empirical ROC plot and AUC {#empirical-definition-empirical-auc-roc}
Definitions:
>
The inferred empirical ROC plot connects adjacent points $\left( \text{FPF}_r, \text{TPF}_r \right )$, including the origin (0,0), with straight lines plus a straight-line segment connecting the observed end-point to (1,1). Like a real ROC, this plot is constrained to lie within the unit square. The area under this plot is the empirical inferred ROC AUC, denoted $A_{\text{ROC}}$.
### The observed end-point of the ROC and its constrained property {#empirical-ROC-constrained}
The abscissa of the observed end-point $FPF_1$, is defined by:
\begin{equation}
\text{FPF}_1 \equiv \text{FPF} \left ( \zeta_1 \right ) = \frac{1}{K_1} \sum_{k_1=1}^{K_1} \mathbb{I} \left ( FP_{k_1 1} \geq \zeta_1 \right )
(\#eq:empirical-fpf-repeat)
\end{equation}
Since each case gets a single FP z-sample, and only unmarked cases get the $-\infty$ z-sample, $\text{FPF}_1 \leq 1$.
The ordinate of the observed end-point $TPF_1$, is defined by:
\begin{equation}
\text{TPF}_1 \equiv \text{TPF}(\zeta_1) = \frac{1}{K_2}\sum_{k_2=1}^{K_2} \mathbb{I}\left ( TP_{k_2 2} \geq \zeta_1 \right )
(\#eq:empirical-TPF-repeat)
\end{equation}
Since each case gets a single TP z-sample, and only unmarked cases get the $-\infty$ z-sample, $\text{TPF}_1 \leq 1$.
It follows that the observed end-point of the ROC (as is well known) satisfies the constrained end-point property: it lies below-left the (1,1) corner of the plot.
>
The upper-right corner (1,1) of the ROC plot (reached by counting all z-samples $\ge -\infty$) is not to be confused with the observed end-point (reached by counting all z-samples $\ge \zeta_1$).
### Illustration with a dataset {#empirical-roc-plot-illustration}
The following code uses `dataset04` to illustrate an empirical ROC plot for treatment 1 and reader 1. The reader should experiment by running `PlotEmpiricalOperatingCharacteristics(dataset04, trts = 1, rdrs = 1, opChType = "ROC")$Plot` with different treatments `trts` and readers `rdrs` specified.
```{r, echo=TRUE}
ret <- PlotEmpiricalOperatingCharacteristics(
dataset04,
trts = 1, rdrs = 1, opChType = "ROC")
print(ret$Plot)
```
Shown next is the calculation of the figure of merit for this dataset. Note that in function `UtilFigureOfMerit()` the `FOM` argument has to be set to `HrAuc`, for highest rating AUC.
```{r, echo=TRUE}
UtilFigureOfMerit(dataset04, FOM = "HrAuc")
```
```{r, echo=FALSE}
auc_HrAuc <- as.numeric(as.matrix(UtilFigureOfMerit(dataset04, FOM = "HrAuc")))
```
## The alternative FROC (AFROC) plot {#empirical-AFROC}
- Fig. 4 in [@bunch1977free] anticipated another way of visualizing FROC data. I subsequently termed this the *alternative FROC (AFROC)* plot [@chakraborty1989maximum].
- The empirical AFROC is defined as the plot of $\text{LLF}(\zeta_r)$ along the ordinate vs. $\text{FPF}(\zeta_r)$ along the abscissa.
- $\text{LLF}_r \equiv \text{LLF}(\zeta_r)$, the ordinate of the FROC plot, was defined in Eqn. \@ref(eq:empirical-LLFr).
- $\text{FPF}_r \equiv \text{FPF}(\zeta_r)$, the abscissa of the ROC plot, was defined in Eqn. \@ref(eq:empirical-fpf).
### Definition: empirical AFROC plot and AUC {#empirical-definition-empirical-auc-afroc}
The empirical AFROC plot connects adjacent operating points $\left( \text{FPF}_r, \text{LLF}_r \right )$, including the origin (0,0) and (1,1), with straight lines. The area under this plot is the empirical AFROC AUC, denoted $A_{\text{AFROC}}$.
Key points:
- The ordinates (LLF) of the FROC and AFROC are identical.
- The abscissa (FPF) of the ROC and AFROC are identical.
- The AFROC is a hybrid plot incorporating aspects of both ROC and FROC plots.
- The AFROC is constrained to within the unit square.
>
Prof. Richard Swensson did not like my choice of the word "alternative" in naming this operating characteristic. I had no idea in 1989 how important this plot would later turn out to be, otherwise a more meaningful name might have been proposed. To anticipate the central message of this book, the AUCs based on this plot (and the weighted versions of it introduced below) are superior to the FROC-AUC and the ROC-AUC in terms of statistical power and reliability (the FROC-AUC is especially unreliable).
### The observed end-point of the AFROC and its constrained property {#empirical-AFROC-constrained}
According to Eqn. \@ref(eq:empirical-fpf) the abscissa of the observed end-point $FPF_1 \leq 1$ and according to Eqn. \@ref(eq:empirical-LLF1a) the ordinate of the observed end-point $\text{LLF}_1 \leq 1$. It follows that the observed end-point of the AFROC satisfies the constrained end-point property, i.e., it lies below-left the (1,1) corner of the plot.
### Illustration with a dataset {#empirical-afroc-plot-illustration}
The following code uses `dataset04` to illustrate an empirical AFROC plot for treatment 1 and reader 1.
```{r, echo=TRUE}
ret <- PlotEmpiricalOperatingCharacteristics(
dataset04,
trts = 1, rdrs = 1, opChType = "AFROC")
print(ret$Plot)
```
Shown next are the figures of merit for this dataset for all treatment reader combinations.
```{r, echo=TRUE}
UtilFigureOfMerit(dataset04, FOM = "AFROC")
```
```{r, echo=FALSE}
auc_afroc <- as.numeric(as.matrix(UtilFigureOfMerit(dataset04, FOM = "AFROC")))
```
## The weighted-AFROC (wAFROC) plot {#empirical-wAFROC}
The AFROC ordinate defined in Eqn. \@ref(eq:empirical-LLFr) gives equal importance to every lesion in a case. A case with more lesions will have more influence on the AFROC (see next section for an explicit demonstration of this fact). This is undesirable since each case (i.e., patient) should get equal importance in the analysis -- as with ROC analysis, one wishes to draw conclusions about the population of cases and each case is an equally valid sample from the population. In particular, one does not want the analysis to be skewed towards cases with greater numbers of lesions. [^empirical1-5]
[^empirical1-5]: Historical note: I became aware of how serious this issue could be when a researcher contacted me about using FROC methodology for nuclear medicine bone scan images, where the number of lesions on diseased cases can vary from a few to a hundred!
Another issue is that the AFROC assigns equal *clinical* importance to each lesion in a case. Lesion weights were introduced [@RN1385] to allow for the possibility that the clinical importance of finding a lesion might be lesion-dependent [@RN1966]. For example, a diseased case may have lesions of two types with differing clinical importance; the figure-of-merit should give more credit to finding the more clinically important one. Clinical importance could be defined as the mortality associated with the specific lesion type; such values can be obtained from epidemiological studies [@desantis2011breast].
Let $W_{k_2 l_2} \geq 0$ denote the *weight* (i.e., short for clinical importance) of lesion $l_2$ in diseased case $k_2$ (since weights are only applicable to diseased cases one can, without ambiguity, drop the case-level and location-level truth subscripts, i.e., the notation $W_{k_2 2 l_2 2}$ would be superfluous). For each diseased case $k_2 2$ the weights are subject to the constraint:
\begin{equation}
\sum_{l_2 =1 }^{L_{k_2 2}} W_{k_2 l_2} = 1
(\#eq:empirical-weights-constraint)
\end{equation}
The weighted lesion localization fraction $\text{wLLF}_r$ is defined by [@RN2484]:
\begin{equation}
\text{wLLF}_r \equiv \text{wLLF}\left ( \zeta_r \right ) = \frac{1}{K_2}\sum_{k_2=1}^{K_2}\sum_{l_2=1}^{L_{k_2 2}}W_{k_2 l_2} \mathbb{I}\left ( z_{k_2 2 l_2 2} \geq \zeta_r \right )
(\#eq:empirical-wLLFr)
\end{equation}
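A minimal sketch of Eqn. \@ref(eq:empirical-wLLFr) follows; the two-case example (ratings and weights) is hypothetical and chosen only to show the mechanics.
```{r, echo=TRUE}
# ll: K2 x maxLL matrix of LL z-samples (-Inf = unmarked lesion or unused cell);
# w:  matching matrix of lesion weights (-Inf in unused cells); the finite
#     weights in each row sum to unity, Eqn. (empirical-weights-constraint).
wLLFr <- function(zeta, ll, w) {
  lesion <- is.finite(w)  # cells corresponding to real lesions
  sum(w[lesion] * (ll[lesion] >= zeta)) / nrow(ll)
}
ll <- rbind(c(1.8,  0.4),   # diseased case 1: two lesions rated 1.8 and 0.4
            c(2.2, -Inf))   # diseased case 2: one lesion rated 2.2
w  <- rbind(c(0.7,  0.3),   # weights of the two lesions in case 1
            c(1.0, -Inf))   # the single lesion in case 2 gets unit weight
wLLFr(1.0, ll, w)           # (0.7 + 1.0)/2 = 0.85: the lesions rated 1.8 and 2.2 qualify
```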
### The empirical wAFROC plot and AUC {#empirical-definition-empirical-auc-wafroc}
>
The empirical wAFROC plot connects adjacent operating points $\left ( \text{FPF}_r, \text{wLLF}_r \right )$, including the origin (0,0), with straight lines plus a straight-line segment connecting the observed end-point to (1,1). The area under this plot is the empirical weighted-AFROC AUC, denoted $A_{\text{wAFROC}}$.
### Illustration with a dataset {#empirical-wafroc-plot-illustration}
The following code uses `dataset04` to illustrate an empirical wAFROC plot for treatment 1 and reader 1.
```{r, echo=TRUE}
ret <- PlotEmpiricalOperatingCharacteristics(
dataset04, trts = 1, rdrs = 1, opChType = "wAFROC")
print(ret$Plot)
```
Shown next is calculation of the figure of merit for this dataset.
```{r, echo=TRUE}
UtilFigureOfMerit(dataset04, FOM = "wAFROC")
```
```{r, echo=FALSE}
auc_wafroc <- as.numeric(as.matrix(UtilFigureOfMerit(dataset04, FOM = "wAFROC")))
```
## AFROC vs. wAFROC {#empirical-numerical-understanding}
The fact that the wAFROC gives equal importance to each diseased case while the AFROC gives more importance to diseased cases with more lesions can be illustrated with a fictitious small dataset consisting of $K_1 = 4$ non-diseased and $K_2 = 5$ diseased cases. The maximum number of NLs per case is two and the maximum number of lesions per case is three. The first two diseased cases have one lesion each, the third and fourth have two lesions each and the fifth has three lesions. Here is how we code the NL and LL z-samples (`t()` is the `R` transpose operator). The negative infinities represent unmarked locations. For example, the first non-diseased case has no NL marks, the second has one mark rated 0.5, etc., and the first diseased case has one NL mark rated 1.5, etc. The first lesion in the LL array was rated 0.9, the second was rated -0.2, ..., and the three lesions in the fifth diseased case were rated 1, 2.5 and 1, respectively.
```{r empirical-numerical, echo = T}
NL <- t(array(c(-Inf, -Inf,
                 0.5, -Inf,
                 0.7,  0.6,
                -0.3, -Inf,
                 1.5, -Inf,
                -Inf, -Inf,
                -Inf, -Inf,
                -Inf, -Inf,
                -Inf, -Inf), dim = c(2, 9)))
LL <- t(array(c( 0.9, -Inf, -Inf,
                -0.2, -Inf, -Inf,
                 1.6, -Inf, -Inf,
                 3,    2,   -Inf,
                 1,    2.5,  1), dim = c(3, 5)))
```
The z-samples are converted to a dataset `frocData` as shown next:
```{r empirical-numerical1a, echo = T}
frocData <- Df2RJafrocDataset(NL, LL, perCase = c(1,1,2,2,3))
```
In the above code `perCase = c(1,1,2,2,3)` specifies the number of lesions per case: 1 in the first diseased case, 1 in the second, 2 in the third, ..., and 3 in the fifth. The function `Df2RJafrocDataset()` generates the dataset object.
The lesion weights are specified in the following lines.
```{r empirical-numerical1c, echo = T}
frocData$lesions$weights[3,] <- c(0.1, 0.9, -Inf)
frocData$lesions$weights[4,] <- c(0.9, 0.1, -Inf)
frocData$lesions$weights[5,] <- c(0.3, 0.4, 0.3)
```
The first and second diseased cases, which have only one lesion each, are assigned unit weights by default. The first lesion in the third diseased case has weight 0.1 and the second has weight 0.9 -- notice that the weights sum to unity. The fourth diseased case has the lesion weights reversed, 0.9 and 0.1. The three lesions in the fifth diseased case are assigned weights 0.3, 0.4 and 0.3.
```{r empirical-numerical1b, echo = F}
K1 <- 4
K2 <- 5
FP <- apply(frocData$ratings$NL, 3, max)
FP <- FP[1:K1]
afrocPlot <- PlotEmpiricalOperatingCharacteristics(
frocData,
trts = 1,
rdrs = 1,
opChType = "AFROC",
legend.position = "NULL")
afrocPlot <- afrocPlot$Plot + ggtitle("A")
wafrocPlot <- PlotEmpiricalOperatingCharacteristics(
frocData,
trts = 1,
rdrs = 1,
opChType = "wAFROC",
legend.position = "NULL")
wafrocPlot <- wafrocPlot$Plot + ggtitle("B")
FPF <- afrocPlot$data$genAbscissa
LLF <- afrocPlot$data$genOrdinate
wLLF <- wafrocPlot$data$genOrdinate
```
### NL and LL z-samples
Shown next is the `NL` z-samples array; it has 9 rows, corresponding to the total number of cases (the first four correspond to non-diseased cases and the rest to diseased cases) and 2 columns, corresponding to the maximum number of NLs per case.
```{r, echo=FALSE}
cat("NL z-samples:\n")
NL
```
Shown next is the `LL` z-samples array; it has 5 rows, corresponding to the total number of diseased cases, and 3 columns, corresponding to the maximum number of LLs per case:
```{r, echo=FALSE}
cat("LL z-samples:\n")
LL
```
### Lesion weights
Shown next is the lesion weights array:
```{r, echo=FALSE}
cat("lesion weights:\n")
frocData$lesions$weights
```
The negative infinities represent missing values.
### FPF
Shown next is the `FP` z-samples array. Since FPs are only possible on non-diseased cases, this is a length 4 row-vector. Each value is the maximum of the two `NL` z-samples for the corresponding non-diseased case. As an example, for case #3 the maximum of the two `NL` values is 0.7.
```{r, echo=FALSE}
cat("FP z-samples:\n")
FP
```
Here are the sorted `FP` z-samples.
```{r, echo=FALSE}
sort(FP)
```
The sorting makes it easy to construct the `FPF` values, shown next.
```{r, echo=FALSE}
cat("FPF values:\n")
for (i in 1:length(FPF)) {
cat(sprintf (" %.3f", FPF[i]))
}
cat("\n")
```
The first non-zero `FPF` value is $0.25 = 1/4$, which occurs when a conceptual sliding threshold is lowered past the highest `FP` value, namely 0.7. (The 0.25 comes from 1 `FP` case divided by 4 non-diseased cases.) The next `FPF` value is $0.5 = 2/4$, which occurs when the sliding threshold is lowered past the next-highest `FP` value, namely 0.5. The next `FPF` value is 0.75 and the last `FPF` value is unity.
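These steps are easily checked programmatically; a minimal sketch (the `FP` values are those shown above):
```{r, echo=TRUE}
# Reproduce the non-trivial FPF values by sliding a threshold down through the
# finite FP values; the -Inf entry (the unmarked non-diseased case) never
# crosses a finite threshold.
FP <- c(-Inf, 0.5, 0.7, -0.3)  # highest NL rating on each of the 4 non-diseased cases
sapply(sort(FP[is.finite(FP)], decreasing = TRUE),
       function(zeta) mean(FP >= zeta))  # 0.25 0.50 0.75
```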
### LLF
Here are the sorted `LL` z-samples.
```{r, echo=FALSE}
sort(LL)
```
The `LLF` values are shown next.
```{r, echo=FALSE}
cat("LLF values:\n")
for (i in 1:length(LLF)) {
cat(sprintf (" %.3f", LLF[i]))
}
cat("\n")
```
The first non-zero `LLF` value is 0.111, which occurs when the sliding threshold is lowered past the highest `LL` value, namely 3. The 0.111 comes from 1 LL divided by 9, the total number of lesions. The next `LLF` value is 0.222, which occurs when the sliding threshold is lowered past the next-highest `LL` value, namely 2.5 (2/9 = 0.222). The next `LLF` value is 0.333, which occurs when the sliding threshold is lowered past 2 (3/9 = 0.333), and so on.
### wLLF
The sorted `LL` z-samples array and the weights are used to construct the `wLLF` values shown next.
```{r, echo=FALSE}
cat("wLLF values:\n")
for (i in 1:length(LLF)) {
cat(sprintf (" %.3f", wLLF[i]))
}
cat("\n")
```
The first non-zero `wLLF` value is 0.18, which occurs when the sliding threshold is lowered past the highest `LL` value, namely 3. Since this comes from lesion #1 on diseased case #4, whose weight is 0.9, the corresponding incremental vertical jump is $1/5*0.9 = 0.18$, which is also the net `wLLF` value corresponding to the most suspicious lesion crossing the cutoff. Notice that we are dividing by 5, the total number of diseased cases, not 9 as in the `LLF` example.
The next `wLLF` value is 0.26, which occurs when the sliding threshold is lowered past the next-highest `LL` value, namely 2.5, which comes from the 2nd lesion on the fifth diseased case with weight 0.4. The incremental jump in `wLLF` is $1/5*0.4 = 0.08$. The net `wLLF` value corresponding to the two most suspicious lesions crossing the cutoff is $1/5*0.9 + 1/5*0.4 = 0.26$.
The next `wLLF` value is 0.280, which occurs when the sliding threshold is lowered past the next-highest `LL` value, namely 2; this comes from lesion #2 on diseased case #4, with weight 0.1. The net `wLLF` value corresponding to the three most suspicious lesions crossing the cutoff is $1/5*0.9 + 1/5*0.4 + 1/5*0.1 = 0.280$, and so on.
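A programmatic check of these hand calculations is sketched below; it uses the `LL` array and the `frocData` weights defined earlier in this section, with the $-\infty$ cells in the weights array taken to denote unused lesion slots (as the printout above indicates).
```{r, echo=TRUE}
# Slide a threshold down through the distinct marked LL values and accumulate
# the weighted jumps of Eqn. (empirical-wLLFr).
W <- frocData$lesions$weights
lesion <- is.finite(W)  # cells corresponding to real lesions
zetas <- sort(unique(LL[is.finite(LL)]), decreasing = TRUE)
round(sapply(zetas,
             function(zeta) sum(W[lesion] * (LL[lesion] >= zeta)) / nrow(LL)),
      3)  # 0.180 0.260 0.280 0.300 0.420 0.620 0.820
```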
The reader should complete these hand-calculations to reproduce all of the `wLLF` values shown above. The values (FPF, LLF and wLLF) defining the AFROC and wAFROC are summarized here:
```{r, echo=FALSE}
x <- data.frame(FPF=FPF, LLF=LLF, wLLF=wLLF)
x
```
This shows that the empirical AFROC is defined by the following 6 operating points: (0,0), (0, 0.7777778), (0.5, 0.7777778), (0.5, 0.8888889), (0.75, 0.8888889) and (1,1). Likewise, the empirical wAFROC is defined by the following 6 operating points: (0,0), (0, 0.62), (0.5, 0.62), (0.5, 0.82), (0.75, 0.82) and (1,1). In each case one simply connects neighboring points with straight lines.
The hand-calculations also show why the AFROC gives more importance to diseased cases with more lesions while the wAFROC does not.
* Considering the AFROC, diseased case #5, with three lesions, contributes three vertical jumps to LLF totaling $3/9 = 0.333333$ ^[The jumps need not be contiguous: they will be contiguous only if the three lesion z-samples are closely spaced such that they are crossed in succession, in any order, by the sliding virtual threshold; otherwise the jumps will be interspersed by jumps from lesions in other cases.]. This is larger than the contribution to LLF of diseased case #1, which has one lesion: $1/9 = 0.11111$.
* Considering the wAFROC, the three lesions on diseased case #5 contribute $1/5*0.3 + 1/5*0.4 + 1/5*0.3 = 0.2$ to wLLF, the same as diseased case #1, $1/5*1 = 0.2$.
Shown in Fig. \@ref(fig:plots-afrocPlot-wafrocPlot) are the empirical AFROC and wAFROC plots.
```{r plots-afrocPlot-wafrocPlot, fig.cap="Left: AFROC plot; Right: corresponding wAFROC plot.", fig.show='hold', echo=FALSE}
grid.arrange(afrocPlot, wafrocPlot, ncol = 2)
```
The operating points can be used to numerically calculate the AUCs under the empirical AFROC and wAFROC plots, as done in the following code:
```{r, echo=T}
afroc_auc <- 0.5 * 0.7777778 +
0.25 * 0.8888889 +
0.25 * 0.8888889 + (1 - 0.8888889) * 0.25 /2
wafroc_auc <- 0.5 * 0.62 +
0.25 * 0.82 +
0.25 * 0.82 +
(1 - 0.82) * 0.25 /2
cat("afroc_auc =", afroc_auc,"\n")
cat("wafroc_auc =", wafroc_auc,"\n")
```
The same AUC results are obtained using the function `UtilFigureOfMerit`:
```{r, echo=TRUE}
cat("AFROC AUC = ",
as.numeric(UtilFigureOfMerit(frocData, FOM = "AFROC")),"\n")
cat("wAFROC AUC = ",
as.numeric(UtilFigureOfMerit(frocData, FOM = "wAFROC")),"\n")
```
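The same values can also be obtained by generic trapezoidal integration over the corner points listed earlier; a minimal sketch (the helper function is mine, not part of `RJafroc`):
```{r, echo=TRUE}
# Area under a piecewise-linear curve through the listed corner points; the
# final segment is the straight-line extension to (1,1).
trapAUC <- function(x, y) sum(diff(x) * (head(y, -1) + tail(y, -1)) / 2)
trapAUC(c(0, 0, 0.5, 0.5, 0.75, 1),
        c(0, 0.7777778, 0.7777778, 0.8888889, 0.8888889, 1))  # AFROC AUC
trapAUC(c(0, 0, 0.5, 0.5, 0.75, 1),
        c(0, 0.62, 0.62, 0.82, 0.82, 1))                      # wAFROC AUC
```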
It is seen that the empirical plots consist of upward and rightward jumps, starting from the origin (0,0) and ending at (1,1). Each upward jump is associated with a `LL` z-sample exceeding a virtual threshold and each rightward jump with a `FP` z-sample exceeding it. Upward jumps tend to increase the area under the AFROC-based plots and rightward jumps tend to decrease it, i.e., correct decisions are rewarded and incorrect ones are penalized. If there are only upward jumps the empirical plot rises from the origin to (0,1): all lesions are correctly localized without generating any FPs, performance is perfect, and the straight-line extension of the plot to (1,1) ensures that the net area is unity. If there are only rightward jumps the operating point moves from the origin to (1,0): none of the lesions are localized, every non-diseased case has at least one NL mark, and despite the straight-line extension to (1,1) the net area is zero. This represents worst possible performance.
## Interpretation of AUCs {#empirical-meanings}
>
* The area under the AFROC is the probability that a lesion is rated higher than any mark on a non-diseased case.
* The area under the weighted-AFROC is the lesion-weight adjusted probability that a lesion is rated higher than any mark on a non-diseased case.
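These statements can be checked numerically for the small dataset of the previous section. The sketch below scores each (non-diseased case, lesion) comparison with a kernel $\psi$ equal to 1 if the lesion rating exceeds the highest rating on the non-diseased case, 0.5 for a tie (including the $-\infty$ vs. $-\infty$ tie) and 0 otherwise; averaging, unweighted or lesion-weighted, reproduces the AFROC and wAFROC AUCs obtained earlier (0.847 and 0.7425). The vectors below simply restate the ratings and weights of that dataset.
```{r, echo=TRUE}
# psi scores one (non-diseased case, lesion) comparison: 1 = win, 0.5 = tie, 0 = loss
psi  <- function(x, y) (y > x) + 0.5 * (y == x)
FP   <- c(-Inf, 0.5, 0.7, -0.3)                          # highest NL rating, 4 non-diseased cases
zLes <- c(0.9, -0.2, 1.6, -Inf, 3, 2, 1, 2.5, 1)         # the 9 lesion ratings (-Inf = unmarked)
wLes <- c(1,   1,    0.1,  0.9, 0.9, 0.1, 0.3, 0.4, 0.3) # matching lesion weights
mean(outer(FP, zLes, psi))                               # A_AFROC: average over lesions and non-diseased cases
sum(sweep(outer(FP, zLes, psi), 2, wLes, "*")) / (4 * 5) # A_wAFROC: weighted, averaged over diseased cases
```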
## Instructive examples {#empirical-instructive-cases}
I am including a few extreme cases that I have found to be instructive. These include chance level performance and observers who do not generate any marks.
### The FROC {#empirical-instructive-cases-FROC}
The chance level FROC is a "flat-liner" hugging the x-axis except for a possible upturn at large NLF. For an observer who does not generate any marks the FROC plot contains but one point, the origin, and $A_{\text{FROC}}=0$.
### The ROC {#empirical-instructive-cases-ROC}
The chance level ROC is the positive diagonal connecting (0,0) to (1,1). There could be several operating points on this diagonal (apart from sampling effects) but $A_{\text{ROC}}=0.5$.
For an observer who does not generate any marks the ROC plot consists of two points, the origin and (1,1), and $A_{\text{ROC}}=0.5$.
### The AFROC {#empirical-instructive-cases-AFROC}
#### Chance level performance {#empirical-instructive-cases-AFROC-chance-level}
The chance level AFROC is not the line connecting (0,0) to (1,1). This is a serious misconception that I have encountered. A chance level observer generates a "flat-liner" but this time the plot ends at (1,0); the straight line extension is a vertical line connecting (1,0) to (1,1) and $A_{\text{AFROC}}=0$.
#### Case of no marks {#empirical-empirical-instructive-cases-AFROC-no-marks}
This is a highly interesting and instructive example. The AFROC plot is a straight line connecting (0,0) and (1,1), which could mistakenly be taken to represent chance level performance. This is far from the truth.
>
An expert radiologist successfully screens out non-diseased cases and sees nothing suspicious in any of them – not mistaking variants of normal anatomy for false lesions on non-diseased cases is a sign of expertise. Suppose the lesions on diseased cases are very difficult to see, even for the expert, so the radiologist does not mark any of them in addition to not marking any NLs on diseased cases. **The expert radiologist therefore does not report anything, i.e., generates no marks, and the operating point is "stuck" at the origin (0,0).** Even in this unusual situation, one would be justified in connecting the origin to (1,1) and claiming area under AFROC is 0.5. The extension gives the radiologist credit for not marking any non-diseased case; of course, the radiologist does not get any credit for marking any of the lesions. An even better radiologist, who finds and marks some of the lesions, will score higher, and AFROC-AUC will exceed 0.5.
### The wAFROC {#empirical-instructive-cases-wAFROC}
Similar comments apply to the wAFROC as described above for the AFROC.
## FROC-AUC is a poor measure {#empirical-froc-auc-poor}
Regarding the ROC-AUC, i.e., $A_{\text{ROC}}$, as the gold standard against which all other figures of merit should be compared for consistency in orderings, shown next are plots of $A_{\text{FROC}}$, $A_{\text{AFROC}}$ and $A_{\text{wAFROC}}$ vs. $A_{\text{ROC}}$ for the dataset used in the previous illustrations.
### Plot of FROC AUC vs. ROC AUC
```{r, echo=FALSE}
df <- data.frame(auc_HrAuc = as.vector(auc_HrAuc),
auc_froc = as.vector(auc_froc))
fit <- lm(auc_froc ~ auc_HrAuc, df)
fit1 <- summary(fit)
r2 <- fit1$r.squared
```
The following is the plot of $A_{\text{FROC}}$ vs. $A_{\text{ROC}}$. There are 20 points on the plot corresponding to 5 treatments and 4 readers. The straight line is a least squares fit. Note the poor correlation and negative slope between $A_{\text{FROC}}$ and $A_{\text{ROC}}$, $R^2$ = `r r2`, slope = `r fit$coefficients[2]`.
```{r, echo=FALSE}
p1 <- ggplot(data = df, aes(x = auc_HrAuc, y = auc_froc)) +
geom_smooth(method = "lm",
se = FALSE, color = "black", formula = y ~ x) +
geom_point() +
scale_x_continuous(limits = c(0.75, 0.92)) +
scale_y_continuous(limits = c(0, 0.5)) +
labs(title = "froc vs. roc")
print(p1)
```
The reason should be fairly obvious. The FROC is unconstrained in the NLF direction and the area under the plot *rewards* an observer who generates more NLs, i.e., as the operating point moves further to the right. (The perfect observer, whose FROC plot is the vertical line connecting (0,0) and (0,1), is heavily penalized since $A_{\text{FROC}} = 0$ for this observer.) One can try to avoid this problem by limiting the area under the FROC to that between $\text{NLF} = 0$ and $\text{NLF} = x$, where $x$ is an arbitrarily chosen fixed value -- indeed this partial area procedure has been used by CAD algorithm designers. Since the choice of $x$ is arbitrary the procedure is subjective. The method would also fail for any observer with $\text{NLF}_{max} < x$, as then the partial area is undefined. This forces the algorithm designer to choose $x$ as the minimum of all $\text{NLF}_{max}$ values over all observers and treatments, which would exclude a lot of data and lead to a statistical power penalty.
### Plot of AFROC AUC vs. ROC AUC
```{r, echo=FALSE}
df <- data.frame(auc_HrAuc = as.vector(auc_HrAuc),
auc_afroc = as.vector(auc_afroc))
fit <- lm(auc_afroc ~ auc_HrAuc, df)
fit1 <- summary(fit)