<style>
.storybox{
border-radius: 15px;
border: 2px solid gray;
background-color: lightgray;
text-align: left;
padding: 10px;
}
</style>
<style>
.storyboxlegend{
border-bottom-style: solid;
border-bottom-color: gray;
border-bottom-width: 3px;
margin-left: -12px;
margin-right: -12px;
margin-top: -13px;
padding: 0.2em 1em;
color: #ffffff;
background-color: gray;
border-radius: 15px 15px 0px 0px;
}
</style>
</head>
<body>
<h1 id="sec:malicious">1.2 Malicious Use</h1>
<p>On the morning of March 20, 1995, five men entered the Tokyo subway
system. After boarding separate subway lines, they continued for several
stops before dropping the bags they were carrying and exiting. An
odorless, colorless liquid inside the bags began to vaporize. Within
minutes, commuters began choking and vomiting. The trains continued on
toward the heart of Tokyo, with sickened passengers leaving the cars at
each station. The fumes were spread at each stop, either by emanating
from the tainted cars or through contact with people’s clothing and
shoes. By the end of the day, 13 people lay dead and around 5,800 were
injured. The group responsible for the attack was the religious cult Aum
Shinrikyo <span class="citation" data-cites="Olson1999AumSO">[1]</span>.
Its motive for murdering innocent people? To bring about the end of the
world.<p>
Powerful new technologies offer tremendous potential benefits, but they
also carry the risk of empowering malicious actors to cause widespread
harm. There will always be those with the worst of intentions, and AIs
could provide them with a formidable tool to achieve their objectives.
Moreover, as AI technology advances, severe malicious use could
potentially destabilize society, increasing the likelihood of other
risks.<p>
In this section, we will explore the various ways in which the malicious
use of advanced AIs could pose catastrophic risks. These include
engineering biochemical weapons, unleashing rogue AIs, using persuasive
AIs to spread propaganda and erode consensus reality, and leveraging
censorship and mass surveillance to irreversibly concentrate power. We
will conclude by discussing possible strategies for mitigating the risks
associated with the malicious use of AIs.</p>
<p><strong>Unilateral actors considerably increase the risks of
malicious use.</strong> In instances where numerous actors have access
to a powerful technology or dangerous information that could be used for
harmful purposes, it only takes one individual to cause significant
devastation. Malicious actors themselves are the clearest example of
this, but recklessness can be equally dangerous. For example, a single
research team might be excited to open source an AI system with
biological research capabilities, which would speed up research and
potentially save lives, but this could also increase the risk of
malicious use if the AI system could be repurposed to develop
bioweapons. In situations like this, the outcome may be determined by
the least risk-averse research group. If only one research group thinks
the benefits outweigh the risks, it could act unilaterally, deciding the
outcome even if most others don’t agree. And if that group is wrong and
someone does use the system to develop a bioweapon, it would be too late to
reverse course.<p>
By default, advanced AIs may increase the destructive capacity of both
the world’s most powerful actors and the general population. Thus, the growing
potential for AIs to empower malicious actors is one of the most severe
threats humanity will face in the coming decades. The examples we give
in this section are only those we can foresee. It is possible that AIs
could aid in the creation of dangerous new technology we cannot
presently imagine, which would further increase risks from malicious
use.</p>
<h2 id="bioterrorism">1.2.1 Bioterrorism</h2>
<p>The rapid advancement of AI technology increases the risk of
bioterrorism. AIs with knowledge of bioengineering could facilitate the
creation of novel bioweapons and lower barriers to obtaining such
agents. Engineered pandemics from AI-assisted bioweapons pose a unique
challenge, as attackers have an advantage over defenders and could
constitute an existential threat to humanity. We will now examine these
risks and how AIs might exacerbate challenges in managing bioterrorism
and engineered pandemics.</p>
<p><strong>Bioengineered pandemics present a new threat.</strong>
Biological agents, including viruses and bacteria, have caused some of
the most devastating catastrophes in history. The Black Death is believed
to have killed more humans than any other single event: an estimated 200
million people, equivalent to roughly four billion deaths relative to
today’s population.
While contemporary advancements in science and medicine have made great
strides in mitigating risks associated with natural pandemics,
engineered pandemics could be designed to be more lethal or easily
transmissible than natural pandemics, presenting a new threat that could
equal or even surpass the devastation wrought by history’s most deadly
plagues <span class="citation"
data-cites="esvelt2022delay">[2]</span>.<p>
Humanity has a long and dark history of weaponizing pathogens, with
records dating back to 1320 BCE describing a war in Asia Minor where
infected sheep were driven across the border to spread tularemia <span
class="citation" data-cites="Trevisanato2007TheP">[3]</span>. During the
twentieth century, 15 countries are known to have developed bioweapons
programs, including the US, USSR, UK, and France. Like chemical weapons,
bioweapons have become a taboo among the international community. While
some state actors continue to operate bioweapons programs <span
class="citation" data-cites="us_state_department_2022">[4]</span>, a
more significant risk may come from non-state actors like Aum Shinrikyo,
ISIS, or simply disturbed individuals. Due to advancements in AI and
biotechnology, the tools and knowledge necessary to engineer pathogens
with capabilities far beyond Cold War-era bioweapons programs will
rapidly democratize.</p>
<p><strong>Biotechnology is progressing rapidly and becoming more
accessible.</strong> A few decades ago, the ability to synthesize new
viruses was limited to a handful of the top scientists working in
advanced laboratories. Today it is estimated that there are 30,000
people with the talent, training, and access to technology to create new
pathogens <span class="citation"
data-cites="esvelt2022delay">[2]</span>. This figure could rapidly
expand. The cost of gene synthesis, which allows the creation of custom
biological agents, has dropped precipitously, halving approximately
every 15 months <span class="citation"
data-cites="carlson_changing_2009">[5]</span>. Furthermore, with the
advent of benchtop DNA synthesis machines, access will become much
easier and could avoid existing gene synthesis screening efforts, which
complicates controlling the spread of such technology <span
class="citation" data-cites="carter2023benchtop">[6]</span>. The chances
of a bioengineered pandemic killing millions, perhaps billions, is
proportional to the number of people with the skills and access to the
technology to synthesize them. With AI assistants, orders of magnitude
more people could have the required skills, thereby increasing the risks
by orders of magnitude.</p>
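<p>To make the cited trend concrete, here is a minimal, hypothetical sketch
(the starting price and time horizon are illustrative assumptions, not
figures from the sources above): a cost that halves every 15 months falls
roughly 256-fold over a decade.</p>
<pre><code># Hypothetical sketch: projecting a cost that halves every 15 months,
# the rate reference [5] reports for gene synthesis. Figures are illustrative.
def projected_cost(initial_cost, months_elapsed, halving_period_months=15.0):
    """Cost after `months_elapsed` months, assuming a constant halving period."""
    return initial_cost * 0.5 ** (months_elapsed / halving_period_months)

# Example: a task costing $10,000 today would cost about $39 after a decade,
# since 120 / 15 = 8 halvings gives a 2**8 = 256-fold reduction.
print(projected_cost(10_000, 120))  # ~39.06
</code></pre>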
<p><strong>AIs could be used to expedite the discovery of new, more
deadly chemical and biological weapons.</strong> In 2022, researchers
took an AI system designed to create new drugs by generating non-toxic,
therapeutic molecules and tweaked it to reward, rather than penalize,
toxicity <span class="citation"
data-cites="Urbina2022DualUO">[7]</span>. After this simple change,
within six hours, it generated 40,000 candidate chemical warfare agents
entirely on its own. It designed not just known deadly chemicals
including VX, but also novel molecules that may be deadlier than any
chemical warfare agents discovered so far. In the field of biology, AIs
have already surpassed human abilities in protein structure prediction
<span class="citation" data-cites="AlphaFold2021">[8]</span> and made
contributions to synthesizing those proteins <span class="citation"
data-cites="wu2019machine">[9]</span>. Similar methods could be used to
create bioweapons and develop pathogens that are deadlier, more
transmissible, and more difficult to treat than anything seen
before.</p>
<p><strong>AIs compound the threat of bioengineered pandemics.</strong>
AIs will increase the number of people who could commit acts of
bioterrorism. General-purpose AIs like ChatGPT are capable of
synthesizing expert knowledge about the deadliest known pathogens, such
as influenza and smallpox, and providing step-by-step instructions about
how a person could create them while evading safety protocols <span
class="citation" data-cites="Soice2023CanLL">[10]</span>. Future
versions of AIs could be even more helpful to potential bioterrorists
when AIs are able to synthesize information into techniques, processes,
and knowledge that are not explicitly available anywhere on the internet.
Public health authorities may respond to these threats with safety
measures, but in bioterrorism, the attacker has the advantage. The
exponential nature of biological threats means that a single attack
could spread to the entire world before an effective defense could be
mounted. Only 100 days after being detected and sequenced, the Omicron
variant of SARS-CoV-2 had infected a quarter of the United States and half
of Europe <span class="citation"
data-cites="esvelt2022delay">[2]</span>. Quarantines and lockdowns
instituted to suppress the COVID-19 pandemic caused a global recession
and still could not prevent the disease from killing millions
worldwide.<p>
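To see why the attacker holds the advantage, it helps to make the
exponential arithmetic explicit. The following is a rough, hypothetical
back-of-the-envelope sketch; the three-day doubling time is an illustrative
assumption, not a figure from the sources above.</p>
<pre><code># Hypothetical sketch of unchecked exponential spread from a single case.
# The 3-day doubling time is an illustrative assumption, not a cited figure.
doubling_time_days = 3
days_until_response = 100
doublings = days_until_response / doubling_time_days  # ~33.3 doublings
unchecked_cases = 2 ** doublings                       # ~1.1e10, more than the
print(f"{unchecked_cases:.2e}")                        # world's population
</code></pre>
<p>Real outbreaks slow as susceptible hosts run out, but the point stands: by
the time a novel pathogen is detected, sequenced, and met with
countermeasures, it could already be seeded across the globe.<p>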
In summary, advanced AIs could constitute a weapon of mass destruction
in the hands of terrorists, by making it easier for them to design,
synthesize, and spread deadly new pathogens. By reducing the required
technical expertise and increasing the lethality and transmissibility of
pathogens, AIs could enable malicious actors to cause global catastrophe
by unleashing pandemics.</p>
<h2 id="unleashing-ai-agents">1.2.2 Unleashing AI Agents</h2>
<p>Many technologies are <em>tools</em> that humans use to pursue our
goals, such as hammers, toasters, and toothbrushes. But AIs are
increasingly built as <em>agents</em> which autonomously take actions in
the world in order to pursue open-ended goals. AI agents can be given
goals such as winning games, making profits on the stock market, or
driving a car to a destination. AI agents therefore pose a unique risk:
people could build AIs that pursue dangerous goals.</p>
<p><strong>Malicious actors could intentionally create rogue
AIs.</strong> One month after the release of GPT-4, an open-source
project bypassed the AI’s safety filters and turned it into an
autonomous AI agent instructed to “destroy humanity,” “establish global
dominance,” and “attain immortality.” Dubbed ChaosGPT, the AI compiled
research on nuclear weapons and sent tweets trying to influence others.
Fortunately, ChaosGPT amounted to little more than a warning, since it
lacked the ability to formulate long-term plans, hack computers, or
survive and spread. Yet given the rapid pace of AI development, ChaosGPT
did offer a glimpse into the risks that more advanced rogue AIs could
pose in the near future.</p>
<p><strong>Many groups may want to unleash AIs or have AIs displace
humanity.</strong> Simply unleashing rogue AIs, like a more
sophisticated version of ChaosGPT, could accomplish mass destruction,
even if those AIs aren’t explicitly told to harm humanity. There are a
variety of beliefs that may drive individuals and groups to do so. One
ideology that could pose a unique threat in this regard is
“accelerationism.” This ideology seeks to accelerate AI development as
rapidly as possible and opposes restrictions on the development or
proliferation of AIs. This sentiment is common among many leading AI
researchers and technology leaders, some of whom are intentionally
racing to build AIs more intelligent than humans. According to Google
co-founder Larry Page, AIs are humanity’s rightful heirs and the next
step of cosmic evolution. He has also expressed the sentiment that
humans maintaining control over AIs is “speciesist” <span
class="citation" data-cites="tegmark2018life">[11]</span>. Jürgen
Schmidhuber, an eminent AI scientist, argued that “In the long run,
humans will not remain the crown of creation... But that’s okay because
there is still beauty, grandeur, and greatness in realizing that you are
a tiny part of a much grander scheme which is leading the universe from
lower complexity towards higher complexity” <span class="citation"
data-cites="pooley2020">[12]</span>. Richard Sutton, another leading AI
scientist, in discussing smarter-than-human AI, asked “why shouldn’t
those who are the smartest become powerful?” and thinks the development
of superintelligence will be an achievement “beyond humanity, beyond
life, beyond good and bad” <span class="citation"
data-cites="sutton_it_2022">[13]</span>. He argues that “succession to
AI is inevitable,” and while “they could displace us from existence,”
“we should not resist succession” <span class="citation"
data-cites="sutton_succession_2023">[14]</span>.<p>
There are several sizable groups who may want to unleash AIs to
intentionally cause harm. For example, sociopaths and psychopaths make
up around 3 percent of the population <span class="citation"
data-cites="SanzGarca2021PrevalenceOP">[15]</span>. In the future,
people who have their livelihoods destroyed by AI automation may grow
resentful, and some may want to retaliate. There are plenty of cases in
which seemingly stable individuals with no history of mental illness
or violence suddenly go on a shooting spree or plant a bomb with the
intent to harm as many innocent people as possible. We can also expect
well-intentioned people to make the situation even more challenging. As
AIs advance, they could make ideal companions—knowing how to provide
comfort, offering advice when needed, and never demanding anything in
return. Inevitably, people will develop emotional bonds with chatbots,
and some will demand that they be granted rights or become
autonomous.<p>
In summary, releasing powerful AIs and allowing them to take actions
independently of humans could lead to a catastrophe. There are many
reasons that people might pursue this, whether because of a desire to
cause harm, an ideological belief in technological acceleration, or a
conviction that AIs should have the same rights and freedoms as
humans.</p>
<h2 id="persuasive-ais">1.2.3 Persuasive AIs</h2>
<p>The deliberate propagation of disinformation is already a serious
issue, reducing our shared understanding of reality and polarizing
opinions. AIs could be used to severely exacerbate this problem by
generating personalized disinformation on a larger scale than before.
Additionally, as AIs become better at predicting and nudging our
behavior, they will become more capable of manipulating us. We will now
discuss how AIs could be leveraged by malicious actors to create a
fractured and dysfunctional society.</p>
<p><strong>AIs could pollute the information ecosystem with motivated
lies.</strong> Sometimes ideas spread not because they are true, but
because they serve the interests of a particular group. “Yellow
journalism” was coined as a pejorative reference to newspapers that
advocated war between Spain and the United States in the late 19th
century, because they believed that sensational war stories would boost
their sales <span class="citation"
data-cites="yellowjournalism">[16]</span>. When public information
sources are flooded with falsehoods, people will sometimes fall prey to
lies, or else come to distrust mainstream narratives, both of which
undermine societal integrity.<p>
Unfortunately, AIs could escalate these existing problems dramatically.
First, AIs could be used to generate unique, personalized disinformation
at a large scale. While there are already many social media bots <span
class="citation" data-cites="Varol2017OnlineHI">[17]</span>, some of
which exist to spread disinformation, historically they have been run by
humans or primitive text generators. The latest AI systems do not need
humans to generate personalized messages, never get tired, and could
potentially interact with millions of users at once <span
class="citation" data-cites="Burtell2023ArtificialIA">[18]</span>.</p>
<p><strong>AIs can exploit users’ trust.</strong> Already, hundreds of
thousands of people pay for chatbots marketed as lovers and friends
<span class="citation" data-cites="Tong2023">[19]</span>, and one man’s
suicide has been partially attributed to interactions with a chatbot
<span class="citation" data-cites="Lovens2023">[20]</span>. As AIs
appear increasingly human-like, people will increasingly form
relationships with them and grow to trust them. AIs that gather personal
information through relationship-building or by accessing extensive
personal data, such as a user’s email account or personal files, could
leverage that information to enhance persuasion. Powerful actors that
control those systems could exploit user trust by delivering
personalized disinformation directly through people’s “friends.”</p>
<p><strong>AIs could centralize control of trusted information.</strong>
Separate from democratizing disinformation, AIs could centralize the
creation and dissemination of trusted information. Only a few actors
have the technical skills and resources to develop cutting-edge AI
systems, and they could use these AIs to spread their preferred
narratives. Alternatively, if AIs are broadly accessible, this could lead
to widespread disinformation, with people retreating to trust only a
small handful of authoritative sources <span class="citation"
data-cites="Vaccari2020DeepfakesAD">[21]</span>. In both scenarios,
there would be fewer sources of trusted information and a small portion
of society would control popular narratives.<p>
AI censorship could further centralize control of information. This
could begin with good intentions, such as using AIs to enhance
fact-checking and help people avoid falling prey to false narratives.
This would not necessarily solve the problem, as disinformation persists
today despite the presence of fact-checkers.<p>
Even worse, purported “fact-checking AIs” might be designed by
authoritarian governments and others to suppress the spread of true
information. Such AIs could be designed to correct most common
misconceptions but provide incorrect information about some sensitive
topics, such as human rights violations committed by certain countries.
But even if fact-checking AIs work as intended, the public might
eventually become entirely dependent on them to adjudicate the truth,
reducing people’s autonomy and making them vulnerable to failures or
hacks of those systems.<p>
In a world with widespread persuasive AI systems, people’s beliefs might
be almost entirely determined by which AI systems they interact with
most. Never knowing whom to trust, people could retreat even further
into ideological enclaves, fearing that any information from outside
those enclaves might be a sophisticated lie. This would erode consensus
reality and people’s ability to cooperate with others, participate in civil
society, and address collective action problems. It would also reduce
our ability to have a conversation as a species about how to mitigate
existential risks from AIs.<p>
In summary, AIs could create highly effective, personalized
disinformation on an unprecedented scale, and could be particularly
persuasive to people they have built personal relationships with. In the
hands of many people, this could create a deluge of disinformation that
debilitates human society, but, kept in the hands of a few, it could
allow governments to control narratives for their own ends.</p>
<h2 id="concentration-of-power">1.2.4 Concentration of Power</h2>
<p>We have discussed several ways in which individuals and groups might
use AIs to cause widespread harm, through bioterrorism; releasing
powerful, uncontrolled AIs; and disinformation. To mitigate these risks,
governments might pursue intense surveillance and seek to keep AIs in
the hands of a trusted minority. This reaction, however, could easily
become an overcorrection, paving the way for an entrenched totalitarian
regime that would be locked in by the power and capacity of AIs. This
scenario represents a form of “top-down” misuse, as opposed to
“bottom-up” misuse by citizens, and could in extreme cases culminate in
an entrenched dystopian civilization.</p>
<p><strong>AIs could lead to an extreme, and perhaps irreversible,
concentration of power.</strong> The persuasive abilities of AIs
combined with their potential for surveillance and the advancement of
autonomous weapons could allow small groups of actors to “lock in” their
control over society, perhaps permanently. To operate effectively, AIs
require a broad set of infrastructure components, such as data centers,
computing power, and big data, which are not equally distributed. Those
in control of powerful systems may use them to suppress dissent, spread
propaganda and disinformation, and otherwise advance their goals, which
may be contrary to public wellbeing.</p>
<p><strong>AIs may entrench a totalitarian regime.</strong> In the hands
of the state, AIs may result in the erosion of civil liberties and
democratic values in general. AIs could allow totalitarian governments
to efficiently collect, process, and act on an unprecedented volume of
information, permitting an ever-smaller group of people to surveil and
exert complete control over the population without the need to enlist
millions of citizens to serve as willing government functionaries.
Overall, as power and control shift away from the public and toward
elites and leaders, democratic governments are highly vulnerable to
totalitarian backsliding. Additionally, AIs could make totalitarian
regimes much longer-lasting; a major way in which such regimes have been
toppled previously is at moments of vulnerability like the death of a
dictator, but AIs, which would be hard to “kill,” could give leadership
much more continuity, leaving few opportunities for reform.</p>
<p><strong>AIs can entrench corporate power at the expense of the public
good.</strong> Corporations have long lobbied to weaken laws and
policies that restrict their actions and power, all in the service of
profit. Corporations in control of powerful AI systems may use them to
manipulate customers into spending more on their products, even to the
detriment of those customers’ wellbeing. The concentration of power and
influence that could be afforded by AIs could enable corporations to
exert unprecedented control over the political system and entirely drown
out the voices of citizens. This could occur even if creators of these
systems know their systems are self-serving or harmful to others, as
they would have incentives to reinforce their power and avoid
distributing control.</p>
<p><strong>In addition to power, locking in certain values may curtail
humanity’s moral progress.</strong> It’s dangerous to allow any set of
values to become permanently entrenched in society. For example, AI
systems have learned racist and sexist views <span class="citation"
data-cites="nadeem_stereoset_2021">[22]</span>, and once those views are
learned, it can be difficult to fully remove them. In addition to
problems we know exist in our society, there may be some we still do
not. Just as we abhor some moral views widely held in the past, people
in the future may want to move past moral views that we hold today, even
those we currently see no problem with. For example, if AI systems had
been trained on the prevailing values of the 1960s, they would have
absorbed moral defects that many people at the time saw no problem with. We may
even be unknowingly perpetuating moral catastrophes today <span
class="citation" data-cites="williams_possibility_2015">[23]</span>.
Therefore, when advanced AIs emerge and transform the world, there is a
risk of their objectives locking in or perpetuating defects in today’s
values. If AIs are not designed to continuously learn and update their
understanding of societal values, they may perpetuate or reinforce
existing defects in their decision-making processes long into the
future.<p>
In summary, although keeping powerful AIs in the hands of a few might
reduce the risks of terrorism, it could further exacerbate power
inequality if misused by governments and corporations. This could lead
to totalitarian rule and intense manipulation of the public by
corporations, and could lock in current values, preventing any further
moral progress.</p>
<br>
<div class="storybox">
<legend class="storyboxlegend">
<span><b>Story: Bioterrorism</b></span>
</legend>
<p><em>The following is an illustrative
hypothetical story to help readers envision some of these risks. This
story is nonetheless somewhat vague to reduce the risk of inspiring
malicious actions based on it.</em></p>
<p>A biotechnology startup is making waves in the industry with its
AI-powered bioengineering model. The company has made bold claims that
this new technology will revolutionize medicine through its ability to
create cures for both known and unknown diseases. The company did,
however, stir up some controversy when it decided to release the program
to approved researchers in the scientific community. Only weeks after
its decision to make the model open-source on a limited basis, the full
model was leaked on the internet for all to see. Its critics pointed out
that the model could be repurposed to design lethal pathogens and
claimed that, with no safeguards in place, the leak provided bad actors
with a powerful tool to cause widespread destruction.<p>
Unknown to the public, an extremist group has been working for years to
engineer a new virus designed to kill large numbers of people. Yet given
their lack of expertise, these efforts have so far been unsuccessful.
When the new AI system is leaked, the group immediately recognizes it as
a potential tool to design the virus and circumvent legal and monitoring
obstacles to obtain the necessary raw materials. The AI system
successfully designs exactly the kind of virus the extremist group was
hoping for. It also provides step-by-step instructions on how to
synthesize large quantities of the virus and circumvent any obstacles to
spreading it. With the synthesized virus in hand, the extremist group
devises a plan to release the virus in several carefully chosen
locations in order to maximize its spread.<p>
The virus has a long incubation period and spreads silently and quickly
throughout the population for months. By the time it is detected, it has
already infected millions and has an alarmingly high mortality rate.
Given its lethality, most who are infected will ultimately die. The
virus may or may not be contained eventually, but not before it kills
millions of people.</p>
</div>
<br>
<br>
<h3>References</h3>
<div id="refs" class="references csl-bib-body" data-entry-spacing="0"
role="list">
<div id="ref-Olson1999AumSO" class="csl-entry" role="listitem">
<div class="csl-left-margin">[1] K.
Olson, <span>“Aum Shinrikyo: Once and future threat?”</span>
<em>Emerging Infectious Diseases</em>, vol. 5, pp. 513–516, 1999.</div>
</div>
<div id="ref-esvelt2022delay" class="csl-entry" role="listitem">
<div class="csl-left-margin">[2] K.
M. Esvelt, <span>“Delay, detect, defend: Preparing for a future in which
thousands can release new pandemics,”</span> Geneva Papers.</div>
</div>
<div id="ref-Trevisanato2007TheP" class="csl-entry" role="listitem">
<div class="csl-left-margin">[3] S.
I. Trevisanato, <span>“The ‘Hittite plague’, an epidemic of tularemia
and the first record of biological warfare.”</span> <em>Medical
hypotheses</em>, vol. 69 6, pp. 1371–4, 2007.</div>
</div>
<div id="ref-us_state_department_2022" class="csl-entry"
role="listitem">
<div class="csl-left-margin">[4] U.
S. D. of State, <span>“Adherence to and compliance with arms control,
nonproliferation, and disarmament agreements and commitments,”</span>
U.S. Department of State, Government Report, 2022.</div>
</div>
<div id="ref-carlson_changing_2009" class="csl-entry" role="listitem">
<div class="csl-left-margin">[5] R.
Carlson, <span>“The changing economics of <span>DNA</span>
synthesis,”</span> <em>Nature Biotechnology</em>, vol. 27, no. 12, pp.
1091–1094, Dec. 2009.</div>
</div>
<div id="ref-carter2023benchtop" class="csl-entry" role="listitem">
<div class="csl-left-margin">[6] S.
R. Carter, J. M. Yassif, and C. Isaac, <span>“Benchtop DNA synthesis
devices: Capabilities, biosecurity implications, and governance,”</span>
Nuclear Threat Initiative, Report, 2023.</div>
</div>
<div id="ref-Urbina2022DualUO" class="csl-entry" role="listitem">
<div class="csl-left-margin">[7] F.
Urbina, F. Lentzos, C. Invernizzi, and S. Ekins, <span>“Dual use of
artificial-intelligence-powered drug discovery,”</span> <em>Nature
Machine Intelligence</em>, vol. 4, pp. 189–191, 2022.</div>
</div>
<div id="ref-AlphaFold2021" class="csl-entry" role="listitem">
<div class="csl-left-margin">[8] J.
Jumper <em>et al.</em>, <span>“Highly accurate protein structure
prediction with <span>AlphaFold</span>,”</span> <em>Nature</em>, vol.
596, no. 7873, pp. 583–589, 2021.</div>
</div>
<div id="ref-wu2019machine" class="csl-entry" role="listitem">
<div class="csl-left-margin">[9] Z.
Wu, S. J. Kan, R. D. Lewis, B. J. Wittmann, and F. H. Arnold,
<span>“Machine learning-assisted directed protein evolution with
combinatorial libraries,”</span> <em>Proceedings of the National Academy
of Sciences</em>, vol. 116, no. 18, pp. 8852–8858, 2019.</div>
</div>
<div id="ref-Soice2023CanLL" class="csl-entry" role="listitem">
<div class="csl-left-margin">[10] E.
Soice, R. H. S. Rocha, K. Cordova, M. A. Specter, and K. M. Esvelt,
<span>“Can large language models democratize access to dual-use
biotechnology?”</span> 2023.</div>
</div>
<div id="ref-tegmark2018life" class="csl-entry" role="listitem">
<div class="csl-left-margin">[11] M.
Tegmark, <em>Life 3.0: Being human in the age of artificial
intelligence</em>. Vintage, 2018.</div>
</div>
<div id="ref-pooley2020" class="csl-entry" role="listitem">
<div class="csl-left-margin">[12] L.
Pooley, <span>“We need to talk about <span>A.I.</span>”</span> New
Zealand, 2020.</div>
</div>
<div id="ref-sutton_it_2022" class="csl-entry" role="listitem">
<div class="csl-left-margin">[13] </div><div
class="csl-right-inline">Richard Sutton [@RichardSSutton], <span>“It
will be the greatest intellectual achievement of all time.
<span>An</span> achievement of science, of engineering, and of the
humanities, whose significance is beyond humanity, beyond life, beyond
good and bad.”</span> <em>Twitter</em>. Sep. 2022.</div>
</div>
<div id="ref-sutton_succession_2023" class="csl-entry" role="listitem">
<div class="csl-left-margin">[14] R.
Sutton, <span>“AI succession,”</span> <em>Youtube</em>. Sep. 2023.</div>
</div>
<div id="ref-SanzGarca2021PrevalenceOP" class="csl-entry"
role="listitem">
<div class="csl-left-margin">[15] A.
Sanz-García, C. Gesteira, J. Sanz, and M. P. García-Vera,
<span>“Prevalence of psychopathy in the general adult population: A
systematic review and meta-analysis,”</span> <em>Frontiers in
Psychology</em>, vol. 12, 2021.</div>
</div>
<div id="ref-yellowjournalism" class="csl-entry" role="listitem">
<div class="csl-left-margin">[16] U.
S. D. of State Office of The Historian, <span>“U.s. Diplomacy and yellow
journalism, 1895–1898.”</span></div>
</div>
<div id="ref-Varol2017OnlineHI" class="csl-entry" role="listitem">
<div class="csl-left-margin">[17] O.
Varol, E. Ferrara, C. A. Davis, F. Menczer, and A. Flammini,
<span>“Online human-bot interactions: Detection, estimation, and
characterization,”</span> <em>ArXiv</em>, vol. abs/1703.03107,
2017.</div>
</div>
<div id="ref-Burtell2023ArtificialIA" class="csl-entry" role="listitem">
<div class="csl-left-margin">[18] M.
Burtell and T. Woodside, <span>“Artificial influence: An analysis of
AI-driven persuasion,”</span> <em>ArXiv</em>, vol. abs/2303.08721,
2023.</div>
</div>
<div id="ref-Tong2023" class="csl-entry" role="listitem">
<div class="csl-left-margin">[19] A.
Tong, <span>“What happens when your AI chatbot stops loving you
back?”</span> <em>Reuters</em>, 2023.</div>
</div>
<div id="ref-Lovens2023" class="csl-entry" role="listitem">
<div class="csl-left-margin">[20] </div><div
class="csl-right-inline">P.-F. Lovens, <span>“Sans ces conversations
avec le chatbot eliza, mon mari serait toujours là,”</span> <em>La
Libre</em>, 2023.</div>
</div>
<div id="ref-Vaccari2020DeepfakesAD" class="csl-entry" role="listitem">
<div class="csl-left-margin">[21] C.
Vaccari and A. Chadwick, <span>“Deepfakes and disinformation: Exploring
the impact of synthetic political video on deception, uncertainty, and
trust in news,”</span> <em>Social Media + Society</em>, vol. 6,
2020.</div>
</div>
<div id="ref-nadeem_stereoset_2021" class="csl-entry" role="listitem">
<div class="csl-left-margin">[22] M.
Nadeem, A. Bethke, and S. Reddy, <span>“<span>StereoSet</span>:
<span>Measuring</span> stereotypical bias in pretrained language
models,”</span> in <em>Proceedings of the 59th <span>Annual</span>
<span>Meeting</span> of the <span>Association</span> for
<span>Computational</span> <span>Linguistics</span> and the 11th
<span>International</span> <span>Joint</span> <span>Conference</span> on
<span>Natural</span> <span>Language</span> <span>Processing</span>
(<span>Volume</span> 1: <span>Long</span> <span>Papers</span>)</em>,
Online: Association for Computational Linguistics, Aug. 2021, pp.
5356–5371.</div>
</div>
<div id="ref-williams_possibility_2015" class="csl-entry"
role="listitem">
<div class="csl-left-margin">[23] E.
G. Williams, <span>“The <span>Possibility</span> of an
<span>Ongoing</span> <span>Moral</span>
<span>Catastrophe</span>,”</span> <em>Ethical Theory and Moral
Practice</em>, vol. 18, no. 5, pp. 971–982, Nov. 2015.</div>
</div>
</div>
</body>
</html>