data.json (11546 lines · 568 KB)
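The records below follow a consistent schema: each top-level key is a paper URL, and each value holds a `year` plus `annotator_1`/`annotator_2` objects containing `narratives` (each with a `type` and fields like `Model Means`, `Ends`) and `quotes`. As a minimal sketch of consuming this schema, the snippet below counts narrative `type` labels across papers and annotators; the abridged inline sample stands in for loading the full `data.json`, and the function name is illustrative, not part of the dataset.

```python
import json
from collections import Counter

# Abridged inline record mirroring the schema of data.json
# (in practice one would json.load() the full file instead).
sample = json.loads("""
{
  "https://aclanthology.org/P17-2067.pdf": {
    "annotator_1": {
      "narratives": [{"type": "vague opposition",
                      "Model Means": ["classify/score veracity"],
                      "Ends": ["limit misinformation"]}],
      "quotes": []
    },
    "year": 2017,
    "annotator_2": {
      "narratives": [{"type": "vague opposition",
                      "Ends": ["limit misinformation"],
                      "Model Means": ["classify/score veracity"]}],
      "quotes": []
    }
  }
}
""")

def narrative_types(data):
    """Count narrative 'type' labels across all papers and both annotators."""
    counts = Counter()
    for paper in data.values():
        for annotator in ("annotator_1", "annotator_2"):
            for narrative in paper.get(annotator, {}).get("narratives", []):
                label = narrative.get("type")
                if label:  # some narrative dicts omit 'type'
                    counts[label] += 1
    return counts

print(narrative_types(sample))  # Counter({'vague opposition': 2})
```

Note that some narrative and quote dicts in the data omit fields entirely (including empty `{}` entries), so lookups should use `.get()` rather than direct indexing.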
{
"https://aclanthology.org/P17-2067.pdf": {
"annotator_1": {
"narratives": [
{
"type": "vague opposition",
"Model Means": [
"classify/score veracity"
],
"Ends": [
"limit misinformation"
]
}
],
"quotes": [
{
"Quotes (why)": [
"in this past election cycle for the 45th president of the united states, the world has witnessed a growing epidemic of fake news. the plague of fake news not only poses serious threats to the integrity of journalism, but has also created turmoils in the political world. the worst real-world impact is that fake news seems to create real-life fears: last year, a man carried an ar-15 rifle and walked in a washington dc pizzeria, because he recently read online that \u201cthis pizzeria was harboring young children as sex slaves as part of a childabuse ring led by hillary clinton\u201d"
],
"Ends": [
"limit misinformation"
],
"Citation support": [
"anecdote"
]
},
{
"Quotes (what)": [
"vlachos and riedel (2014) are the first to release a public fake news detection and fact-checking dataset, but it only includes 221 statements, which does not permit machine learning based assessments. to address these issues, we introduce the liar dataset, which includes 12,836 short statements labeled for truthfulness, subject, context/venue, speaker, state, party, and prior history"
],
"Model Means": [
"classify/score veracity"
],
"Citation support": [
"previous work"
]
}
]
},
"year": 2017,
"annotator_2": {
"narratives": [
{
"type": "vague opposition",
"Ends": [
"limit misinformation"
],
"Model Means": [
"classify/score veracity"
]
}
],
"quotes": [
{
"Quotes (why)": [
"in this past election cycle for the 45th president of the united states, the world has witnessed a growing epidemic of fake news. the plague of fake news not only poses serious threats to the integrity of journalism, but has also created tur- moils in the political world. the worst real-world impact is that fake news seems to create real-life fears: last year, a man carried an ar-15 rifle and walked in a washington dc pizzeria, because he recently read online that \u201cthis pizzeria was harboring young children as sex slaves as part of a child- abuse ring led by hillary clinton\u201d1. the man was later arrested by police, and he was charged for firing an assault rifle in the restaurant (kang and goldman, 2016)."
],
"Ends": [
"limit misinformation"
],
"Citation support": [
"anecdotal"
]
},
{
"Quotes (what)": [
"the problem of fake news detection is more challenging than detecting deceptive reviews, since the political language on tv interviews, posts on facebook and twitters are mostly short statements. however, the lack of manually labeled fake news dataset is still a bottleneck for advancing computational-intensive, broad- coverage models in this direction. vlachos and riedel (2014) are the first to release a public fake news detection and fact-checking dataset, but it only includes 221 statements, which does not per- mit machine learning based assessments. to address these issues, we introduce the liar dataset, which includes 12,836 short statements labeled for truthfulness, subject, context/venue, speaker, state, party, and prior history."
],
"Citation support": [
"previous research article"
]
},
{
"Model Means": [
"classify/score veracity"
]
}
]
}
},
"https://www.ijcai.org/Proceedings/16/Papers/537.pdf": {
"annotator_1": {
"narratives": [
{
"type": "automated external fact-checking",
"Data Subjects": [
"social media users"
],
"Model Means": [
"classify/score veracity"
],
"Application Means": [
"supplant human fact-checkers"
],
"Citation Support for Narratives": [
"sense of threat"
],
"Ends": [
"limit misinformation"
]
}
],
"quotes": [
{
"Quotes (why)": [
"false rumors are damaging as they cause public panic and social unrest. for example, on august 25th of 2015, a rumor about \u201cshootouts and kidnappings by drug gangs happening near schools in veracruz\u201d spread through twitter and facebook1. this caused severe chaos in the city involving 26 car crashes, because people left their cars in the middle of a street and rushed to pick up their children from school."
],
"Ends": [
"limit misinformation"
],
"Citation support": [
"anecdote"
]
},
{
"Quotes (what)": [
"this incident of a false rumor highlights that automatically predicting the veracity of information on social media is of high practical value."
],
"Data Subjects": [
"social media users"
],
"Model Means": [
"classify/score veracity"
]
},
{
"Quotes (what)": [
"debunking rumors at an early stage of diffusion is particularly crucial to minimizing their harmful effects. to distin- guish rumors from factual events, individuals and organiza- tions often have relied on common sense and investigative journalism. rumor reporting websites like snopes.com and factcheck.org are such collaborative efforts. however, be- cause manual verification steps are involved in such efforts, these websites are not comprehensive in their topical cover- age and also can have long debunking delay."
],
"Application Means": [
"supplant human fact-checkers"
],
"Ends": [
"limit misinformation"
]
}
]
},
"year": 2016,
"annotator_2": {
"narratives": [
{
"Data Subjects": [
"social media users"
],
"Model Means": [
"classify/score veracity"
],
"Application Means": [
"supplant human fact-checkers"
],
"Ends": [
"limit misinformation"
],
"Citation Support for Narratives": [
"sense of threat"
],
"type": "automated external fact-checking"
}
],
"quotes": [
{
"Quotes (why)": [
"false rumors are damaging as they cause public panic and social unrest.for example, on august 25th of 2015, a rumor about \u201cshootouts and kid- nappings by drug gangs happening near schools in veracruz\u201d spread through twitter and facebook1. this caused severe chaos in the city involving 26 car crashes, because people left their cars in the middle of a street and rushed to pick up their children from school. this incident of a false rumor high- lights that automatically predicting the veracity of informa- tion on social media is of high practical value."
],
"Ends": [
"limit misinformation"
],
"Citation support": [
"anecdotal"
]
},
{},
{
"Quotes (what)": [
"this incident of a false rumor highlights that automatically predicting the veracity of information on social media is of high practical value."
],
"Data Subjects": [
"social media users"
],
"Model Means": [
"classify/score veracity"
],
"Application Means": [
"supplant human fact-checkers"
],
"Ends": [
"limit misinformation"
]
},
{
"Quotes (what)": [
"debunking rumors at an early stage of diffusion is particularly crucial to minimizing their harmful effects. to distinguish rumors from factual events, individuals and organizations often have relied on common sense and investigative journalism. rumor reporting websites like snopes.com and factcheck.org are such collaborative efforts. however, because manual verification steps are involved in such efforts, these websites are not comprehensive in their topical coverage and also can have long debunking delay."
],
"Data Subjects": [
"social media users"
],
"Model Means": [
"classify/score veracity"
],
"Application Means": [
"supplant human fact-checkers"
],
"Ends": [
"limit misinformation"
]
}
]
}
},
"https://aclanthology.org/N18-1074.pdf": {
"annotator_1": {
"narratives": [
{
"type": "vague opposition",
"Data Subjects": [
"technical writers",
"product reviewers"
],
"Model Means": [
"classify/score veracity",
"evidence retrieval"
],
"Ends": [
"limit misinformation"
]
}
],
"quotes": [
{
"Quotes (why)": [
"the ever-increasing amounts of textual information available combined with the ease in sharing it through the web has increased the demand for verification, also referred to as fact checking. while it has received a lot of attention in the context of journalism, verification is important for other domains, e.g. information in scientific publications, product reviews, etc."
],
"Data Subjects": [
"technical writers",
"product reviewers"
],
"Ends": [
"limit misinformation"
],
"Citation support": [
"sense of threat"
]
},
{
"Quotes (what)": [
"in this paper we focus on verification of textual claims against textual sources. when compared to textual entailment (te)/natural language inference (dagan et al., 2009; bowman et al., 2015), the key difference is that in these tasks the passage to verify each claim is given, and in recent years it typically consists a single sentence, while in verification systems it is retrieved from a large set of documents in order to form the evidence"
],
"Model Means": [
"evidence retrieval",
"classify/score veracity"
]
}
]
},
"year": 2018,
"annotator_2": {
"narratives": [
{
"Data Subjects": [
"technical writers",
"product reviewers"
],
"Ends": [
"limit misinformation"
],
"type": "vague opposition",
"Model Means": [
"classify/score veracity",
"evidence retrieval"
]
}
],
"quotes": [
{
"Quotes (why)": [
"the ever-increasing amounts of textual information available combined with the ease in sharing it through the web has increased the demand for verification, also referred to as fact checking. while it has received a lot of attention in the context of journalism, verification is important for other domains, e.g. information in scientific publications, product reviews, etc."
],
"Data Subjects": [
"technical writers",
"product reviewers"
],
"Ends": [
"limit misinformation"
],
"Citation support": [
"sense of threat"
]
},
{
"Quotes (what)": [
"in this paper we focus on verification of textual claims against textual sources. when compared to textual entailment (te)/natural language infer- ence (dagan et al., 2009; bowman et al., 2015), the key difference is that in these tasks the passage to verify each claim is given, and in recent years it typically consists a single sentence, while in veri- fication systems it is retrieved from a large set of documents in order to form the evidence. another related task is question answering (qa), for which approaches have recently been extended to han- dle large-scale resources such as wikipedia (chen et al., 2017). however, questions typically pro- vide the information needed to identify the answer, while information missing from a claim can of- ten be crucial in retrieving refuting evidence."
],
"Model Means": [
"evidence retrieval",
"classify/score veracity"
]
},
{
"Quotes (what)": [
"in this paper we present a new dataset for claim verification, fever: fact extraction and ver- ification. it consists of 185,445 claims manually verified against the introductory sections of wikipedia pages and classified as supported, refuted or notenoughinfo"
]
},
{
"Quotes (what)": [
"to characterize the challenges posed by fever we develop a pipeline approach which, given a claim, first identifies relevant documents, then selects sentences forming the evidence from the doc- uments and finally classifies the claim w.r.t. ev- idence."
],
"Quotes (why)": [
"however, despite the rising interest in verification and fact checking among researchers, the datasets currently used for this task are limited to a few hundred claims."
],
"Model Means": [
"evidence retrieval",
"classify/score veracity"
]
}
]
}
},
"https://aclanthology.org/C18-1287.pdf": {
"annotator_1": {
"narratives": [
{
"type": "vague opposition",
"Data Subjects": [
"professional journalists"
],
"Model Means": [
"classify/score veracity"
],
"Ends": [
"limit misinformation"
]
}
],
"quotes": [
{
"Quotes (why)": [
"the proliferation of misleading information in everyday access media outlets such as social me- dia feeds, news blogs, and online newspapers have made it challenging to identify trustworthy news sources, thus increasing the need for computational tools able to provide insights into the reliability of online content."
],
"Data Subjects": [
"professional journalists"
],
"Ends": [
"limit misinformation"
],
"Citation support": [
"sense of threat"
]
},
{
"Quotes (why)": [
"we conduct a set of learning experiments to build accurate fake news detectors, and show that we can achieve accuracies of up to 76%."
],
"Model Means": [
"classify/score veracity"
]
}
]
},
"year": 2018,
"annotator_2": {
"narratives": [
{
"type": "vague identification",
"Data Subjects": [
"professional journalists"
],
"Ends": [
"limit misinformation"
],
"Model Means": [
"classify/score veracity"
],
"Application Means": [
"provide labels/veracity scores"
]
}
],
"quotes": [
{
"Quotes (why)": [
"fake news detection has recently attracted a growing interest from the general public and researchers as the circulation of misinformation online increases, particularly in media outlets such as social media feeds, news blogs, and online newspapers. a recent report by the jumpshot tech blog showed that facebook referrals accounted for 50% of the total traffic to fake news sites and 20% total traffic to reputablewebsites.1 since as many as 62% of u.s. adults consume news on social media(jeffreyand elisa, 2016), being able to identify fake content in online sources is a pressing need."
],
"Data Subjects": [
"professional journalists"
],
"Ends": [
"limit misinformation"
],
"Citation support": [
"news article"
]
},
{
"Quotes (what)": [
"in this paper, we develop computational resources and models for the task of fake news detection. we introduce two novel datasets covering seven different domains. one of the datasets is collected by combining manual and crowdsourced annotation approaches, while the second is collected directly from the web. using these datasets, we conduct several exploratory analyses to identify linguistic properties that are predominantly present in fake news content, and we build fake news detectors relying on linguistic features that achieve accuracies of up to 76%. to place our results in perspective, we compare the performance of the developed classifiers with an empirical human baseline."
]
},
{
"Model Means": [
"classify/score veracity"
],
"Application Means": [
"provide labels/veracity scores"
]
}
]
}
},
"https://aclanthology.org/D17-1317.pdf": {
"annotator_1": {
"narratives": [
{
"type": "automated external fact-checking",
"Data Subjects": [
"public figures/politicians",
"professional journalists"
],
"Model Means": [
"classify/score veracity"
],
"Citation Support for Narratives": [
"investigative"
],
"Ends": [
"limit misinformation"
]
}
],
"quotes": [
{
"Quotes (why)": [
"words in news media and political discourse have a considerable power in shaping people\u2019s beliefs and opinions. as a result, their truthfulness is often compromised to maximize impact. recently, fake news has captured worldwide interest, and the number of organized efforts dedicated solely to fact-checking has almost tripled since 2014"
],
"Data Subjects": [
"professional journalists",
"public figures/politicians"
],
"Ends": [
"limit misinformation"
],
"Citation support": [
"increasing number of fact-checkers"
]
},
{
"Quotes (what)": [
"to probe the feasi- bility of automatic political fact-checking, we also present a case study based on politifact.com using their factuality judg- ments on a 6-point scale."
],
"Model Means": [
"classify/score veracity"
],
"Application Means": [
"supplant human fact-checkers"
]
}
]
},
"year": 2017,
"annotator_2": {
"narratives": [
{
"Data Subjects": [
"public figures/politicians",
"professional journalists"
],
"Model Means": [
"classify/score veracity"
],
"Application Means": [
"supplant human fact-checkers"
],
"Citation Support for Narratives": [
"investigative"
],
"type": "automated external fact-checking",
"Ends": [
"limit misinformation"
]
}
],
"quotes": [
{
"Quotes (why)": [
"words in news media and political discourse have a considerable power in shaping people\u2019s beliefs and opinions. as a result, their truthfulness is of- ten compromised to maximize impact. recently, fake news has captured worldwide interest, and the number of organized efforts dedicated solely to fact-checking has almost tripled since 2014.1 organizations, such as politifact.com, actively investigate and rate the veracity of comments made by public figures, journalists, and organizations."
],
"Data Subjects": [
"professional journalists",
"public figures/politicians"
],
"Ends": [
"limit misinformation"
],
"Citation support": [
"news article"
]
},
{
"Quotes (why)": [
"analysis indicates that falsehoods often arise from subtle differences in phrasing rather than outright fabrication (rubin et al., 2015). compared to most prior work on deception literature that focused on binary categorization of truth and deception, political fact-checking poses a new challenge as it involves a graded notion of truthfulness."
]
},
{
"Quotes (what)": [
"in this paper, we present an analytic study characterizing the language of political quotes and news media written with varying intents and degrees of truth. we also investigate graded deception detection, determining the truthfulness on a 6-point scale using the political fact-checking database available at politifact."
],
"Data Subjects": [
"professional journalists",
"public figures/politicians"
],
"Model Means": [
"classify/score veracity"
],
"Application Means": [
"supplant human fact-checkers"
]
}
]
}
},
"https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0150989": {
"annotator_1": {
"narratives": [
{
"type": "scientific curiosity",
"Data Subjects": [
"social media users"
],
"Data Actors": [
"scientists"
],
"Model Owners": [
"scientists"
],
"Application Means": [
"analyse data"
],
"Citation Support for Narratives": [
"scientific research"
],
"Ends": [
"develop knowledge of nlp/language"
]
},
{
"type": "vague opposition",
"Ends": [
"limit misinformation"
]
}
],
"quotes": [
{
"Quotes (why)": [
"while rumours in social media are a concern, little work has been done so far to understand how they propagate. in this work we aim to help rectify this by examining in some detail rumours generated on twitter within the context of nine different newsworthy events."
],
"Data Subjects": [
"social media users"
],
"Data Actors": [
"scientists"
],
"Model Owners": [
"scientists"
],
"Application Means": [
"analyse data"
],
"Ends": [
"develop knowledge of nlp/language",
"limit misinformation"
]
},
{
"Quotes (why)": [
"whilst one can readily see users denying rumours once they have been debunked, users appear to be less capable of distinguishing true from false rumours when their veracity remains in question. in fact, we show that the prevalent tendency for users is to support every unverified rumour. [...] our study reinforces the need for developing robust machine learning techniques that can provide assistance in real time for assessing the veracity of rumours"
],
"Ends": [
"limit misinformation"
]
}
]
},
"year": 2016,
"annotator_2": {
"narratives": [
{
"type": "vague opposition"
},
{
"Data Subjects": [
"social media users"
],
"Ends": [
"limit misinformation"
],
"type": "scientific curiosity"
}
],
"quotes": [
{
"Quotes (why)": [
"the potential for spreading information quickly through a large community of users is one of the most valuable characteristics of social media platforms. social media, being open to everyone, enable not only news organisations and journalists to post news stories, but also ordinary citizens to report from their own perspectives and experiences. this broadens the scope and diversity of information that one can get from social media and some- times may even lead to stories breaking before they appear in mainstream media outlets [1]. while this often leads to having access to more comprehensive information, it also comes with caveats, one of which is the need to sift through the different information sources to assess their accuracy [2]."
],
"Data Subjects": [
"social media users"
],
"Ends": [
"limit misinformation"
],
"Citation support": [
"news article"
]
},
{
"Quotes (what)": [
"while rumours in social media are a concern, little work has been done so far to understand how they propagate. in this work we aim to help rectify this by examining in some detail rumours generated on twitter within the context of nine different newsworthy events."
],
"Data Actors": [
"scientists"
],
"Application Means": [
"analyse data"
]
},
{
"Quotes (what)": [
"our study looks at conversations around rumours in social media, exploring how social media users respond to rumours both before and after the veracity of a rumour is resolved. our study provides insight into rumour diffusion, support and denial in social media, helping both those who gather news from social media in determining accuracy of information and the development of machine learning systems that can provide assistance in real-time for assessing the veracity of rumours [6]."
],
"Quotes (why)": [
"the spread of misinformation is especially important in the context of breaking news, where new pieces of information are released piecemeal, often starting off as unverified infor- mation in the form of a rumour. these rumours then spread to large numbers of users, influ- encing perception and understanding of events, despite being unverified. social media rumours that are later proven false can have harmful consequences both for individuals and for society [3]. for instance, a rumour in 2013 about the white house having been bombed, injuring barack obama, which was tweeted from ap\u2019s twitter account by hackers, spooked stock markets in the us [4]. a major event that was similarly riddled with consequential rumours was hurricane sandy, which hit the east coast of the us in 2012. part of the city of new york suffered from power outages and many people had to rely on the internet accessed through their mobile phones for information. to prevent major incidents, the us federal emergency management agency had to set up a web page specifically for rumour control [5]."
],
"Data Subjects": [
"social media users"
],
"Ends": [
"limit misinformation"
]
},
{}
]
}
},
"https://aclanthology.org/P18-1022.pdf": {
"annotator_1": {
"narratives": [
{
"type": "assisted media consumption",
"Data Actors": [
"media consumers"
],
"Model Owners": [
"social media companies"
],
"Model Means": [
"classify/score veracity"
],
"Application Means": [
"provide labels/veracity scores",
"identify claims"
],
"Citation Support for Narratives": [
"vague community"
],
"Ends": [
"limit misinformation"
]
},
{
"type": "automated content moderation",
"Data Actors": [
"algorithm"
],
"Model Owners": [
"social media companies"
],
"Model Means": [
"classify/score veracity"
],
"Application Means": [
"identify claims"
],
"Citation Support for Narratives": [
"vague community"
],
"Ends": [
"limit misinformation"
]
}
],
"quotes": [
{
"Quotes (why)": [
"the fake news hype caused a widespread disillusionment about so- cial media, and many politicians, news publishers, it companies, activists, and scientists concur that this is where to draw the line"
],
"Ends": [
"limit misinformation"
],
"Citation support": [
"vague community"
]
},
{
"Quotes (what)": [
"many favor a two-step approach where fake news items are detected and then countermeasures are implemented to foreclose rumors and to dis- courage repetition. while some countermeasures are already tried in practice, such as displaying warnings and withholding ad revenue, fake news detection is still in its infancy."
],
"Data Actors": [
"media consumers",
"algorithm"
],
"Model Owners": [
"social media companies"
],
"Application Means": [
"identify claims",
"provide labels/veracity scores"
],
"Citation support": [
"vague community"
]
},
{
"Quotes (what)": [
"we show how a style analysis can distin- guish hyperpartisan news from the main- stream (f1 = 0.78), and satire from both (f1 = 0.81)"
],
"Model Means": [
"classify/score veracity"
]
}
]
},
"year": 2018,
"annotator_2": {
"narratives": [
{
"Application Means": [
"provide labels/veracity scores",
"identify claims"
],
"Ends": [
"limit misinformation"
],
"Citation Support for Narratives": [
"vague community"
],
"type": "automated content moderation",
"Model Owners": [
"social media companies"
],
"Data Subjects": [
"social media users"
],
"Model Means": [
"classify/score veracity"
]
}
],
"quotes": [
{
"Quotes (why)": [
"the media and the public are currently discussing a new phenomenon called \u201cfake news\u201d and its potential role in swaying recent elections, how it may affect democratic societies, and what can and should be done about it."
],
"Model Owners": [
"social media companies"
],
"Ends": [
"limit misinformation"
],
"Citation support": [
"sense of threat"
]
},
{
"Quotes (why)": [
"although traditional yellow press has been spreading \u2018news\u2019 of varying degrees of truthfulness long before the digital revolution, the fact that modern social media amplify fake news to outperform real news gives many people pause. the fake news hype caused a widespread disillusionment about social media, and many politicians, news publish- ers, it companies, activists, and scientists concur that this is where to draw the line."
],
"Data Subjects": [
"social media users"
],
"Model Owners": [
"social media companies"
],
"Ends": [
"limit misinformation"
]
},
{
"Quotes (what)": [
"many favor a two-step approach where fake news items are detected and then countermeasures are implemented to foreclose rumors and to discourage repetition. while some countermeasures are already tried in practice, such as displaying warnings and withholding ad revenue, fake news detection is still in its infancy. at any rate, a near- real time reaction is crucial: once a fake news item begins to spread virally, the damage is done and un- doing it becomes arduous. since knowledge-based and context-based approaches to fake news detection can only be applied after publication, i.e., as news events unfold and as social interactions occur, they may not be fast enough. we have identified style-based approaches as a viable alternative, allowing for instantaneous re- actions, albeit not to fake news, but to hyperpartisanship. in this regard we contribute (1) a large news corpus annotated by experts with respect to veracity and hyperpartisanship, (2) extensive experiments on discriminating fake news, hyperpartisan news, and satire based solely on writing style, and (3) validation experiments to verify our finding that the writing style of the left and the right have more in common than any of the two have with the mainstream, applying unmasking in a novel way."
],
"Application Means": [
"identify claims"
],
"Ends": [
"limit misinformation"
]
},
{
"Model Means": [
"classify/score veracity"
],
"Application Means": [
"provide labels/veracity scores"
]
}
]
}
},
"https://dl.acm.org/doi/pdf/10.1145/3219819.3219903": {
"annotator_1": {
"narratives": [
{
"type": "vague identification",
"Model Means": [
"classify/score veracity"
],
"Application Means": [
"identify claims"
],
"Ends": [
"limit misinformation"
]
}
],
"quotes": [
{
"Quotes (why)": [
"the dissemination of fake news may cause large-scale negative effects, and sometimes can affect or even manipulate important public events. for example, within the final three months of the 2016 u.s. presidential election, the fake news generated to favor either of the two nominees was believed by many people and was shared by more than 37 million times on facebook [ 1, 7 ]. therefore, it is in great need of an auto- matic detector to mitigate the serious negative effects caused by the fake news"
],
"Model Means": [
"classify/score veracity"
],
"Application Means": [
"identify claims"
],
"Ends": [
"limit misinformation"
],
"Citation support": [
"anecdote"
]
}
]
},
"year": 2018,
"annotator_2": {
"narratives": [
{
"Model Means": [
"classify/score veracity"
],
"Application Means": [
"identify claims"
],
"type": "vague identification",
"Ends": [
"limit misinformation"
]
}
],
"quotes": [
{
"Quotes (why)": [
"the recent proliferation of social media has significantly changed the way in which people acquire information. nowadays, there are increasingly more people consuming news through social media, which can provide timely and comprehensive multimedia information on the events taking place all over the world. compared with traditional text news, the news with images and videos can provide a better storytelling and attract more attention from readers. unfortunately, this is also taken advantage by fake news which usually contain misrepresented or even forged images, to mislead the readers and get rapid dissemination."
],
"Ends": [
"limit misinformation"
],
"Citation support": [
"scientific article"
]
},
{
"Quotes (why)": [
"the dissemination of fake news may cause large-scale negative effects, and sometimes can affect or even manipulate important public events. for example, within the final three months of the 2016 u.s. presidential election, the fake news generated to favor either of the two nominees was believed by many people and was shared by more than 37 million times on facebook [1, 7]. therefore, it is in great need of an auto- matic detector to mitigate the serious negative effects caused by the fake news."
],
"Ends": [
"limit misinformation"
]
},
{
"Quotes (why)": [
"thus far, various fake news detection approaches, including both traditional learning [6, 15, 29] and deep learning based models [21, 25], have been exploited to identify fake news. with sufficient verified posts on different events, existing deep learning models have achieved performance improvement over traditional ones due to their superior ability of feature extraction. however, they are still not able to handle the unique challenge of fake news detection, i.e., detecting fake news on newly emerged and time-critical events [27]. due to lack of the corresponding prior knowledge, the verified posts about such events can be hardly obtained in a timely manner, which leads to the unsatisfactory performance of existing models. actually, existing models tend to capture lots of event-specific features which are not shared among different events. such event-specific features, though being able to help classify the posts on verified events, would hurt the detection with regard to newly emerged events."
],
"Ends": [
"limit misinformation"
]
},
{
"Quotes (what)": [
"for this reason, instead of capturing event-specific features, we believe that learning the shared features among all the events would help us with the detection of fake news from unverified posts. therefore, the goal of this work is to design an effective model to remove the nontransferable event-specific features and preserve the shared features among all the events for the task of identifying fake news"
],
"Ends": [
"limit misinformation"
]
},
{
"Quotes (what)": [
"to remove event-specific features, the first step is to identify them. for posts on different events, they have their own unique or specific features that are not sharable. such features can be de- tected by measuring the difference among posts corresponding to different events. here the posts can be represented by the learned features. thus, identifying event-specific features is equivalent to measuring the difference among learned features on different events. however, it is a technically challenging problem. first, since the learned feature representations of posts are high-dimensional, simple metrics like the squared error may not be able to estimate the differences among such complicated feature representations. second, the feature representations keep changing during the train- ing stage. this requires the proposed measurement mechanism to capture the changes of feature representations and consistently pro- vide the accurate measurement. although this is very challenging, the effective estimation of dissimilarities among the learned fea- tures on different events is the premise of removing event-specific features. thus, how to effectively estimate the dissimilarities under this condition is the challenge that we have to address. in order to address the aforementioned challenges, we propose an end-to-end framework referred to as event adversarial neural networks (eann) for fake news detection based on multi-modal features. inspired by the idea of adversarial networks [10], we incor- porate the event discriminator to predict the event auxiliary labels during training stage, and the corresponding loss can be used to estimate the dissimilarities of feature representations among differ- ent events."
],
"Model Means": [
"classify/score veracity"
],
"Application Means": [
"identify claims"
]
},
{
"Quotes (what)": [
"the proposed model eann consists of three main compo- nents: the multi-modal feature extractor, the fake news detector, and the event discriminator. the multi-modal feature extractor cooperates with the fake news detector to carry out the major task of identifying false news. simultaneously, the multi-modal feature extractor tries to fool the event discriminator to learn the event invariant representations."
],
"Model Means": [
"classify/score veracity"
]
}
]
}
},
"https://aclanthology.org/W16-0802.pdf": {
"annotator_1": {
"narratives": [
{
"type": "automated content moderation",
"Data Subjects": [
"social media users"
],
"Data Actors": [
"algorithm"
],
"Model Means": [
"classify/score veracity"
],
"Application Means": [
"automated removal"
],
"Ends": [
"limit misinformation"
]
},
{
"type": "assisted media consumption",
"Data Subjects": [
"social media users"
],
"Data Actors": [
"media consumers"
],
"Model Means": [
"classify/score veracity"
],
"Application Means": [
"provide labels/veracity scores"
],
"Ends": [
"limit misinformation"
]
},
{
"type": "assisted internal fact-checking",
"Data Subjects": [
"professional journalists"
],
"Data Actors": [
"professional journalists"
],
"Model Means": [
"classify/score veracity"
],
"Application Means": [
"provide labels/veracity scores"
],
"Ends": [
"limit misinformation"
]
}
],
"quotes": [
{
"Quotes (why)": [
"high rates of media consumption and low trust in news institutions create an optimal environment for the \u201crapid viral spread of information that is either intentionally or unintentionally misleading or pro- vocative\u201d (howell, 2013). journalists and other content producers are incentivized towards speed and spectacle over accuracy (chen, conroy, & rubin, 2015) and content consumers often lack the literacy skills required to interpret news critically (hango, 2014). what is needed for both content producers and consumers is an automated assistive tool that can save time and cognitive effort by flagging/filtering inaccurate or false information."
],
"Data Subjects": [
"professional journalists",
"social media users"
],
"Data Actors": [
"professional journalists",
"media consumers",
"algorithm"
],
"Model Means": [
"classify/score veracity"
],
"Application Means": [
"provide labels/veracity scores",
"automated removal"
],
"Ends": [
"limit misinformation"
],