Restricting Dangerous Research: Has It Worked Before, and Could It Work for AI?
When creating policies to deal with AI’s rapid progress, it is important to minimize dangerous AI capability gains that could cause a catastrophe. While restricting scientific research is controversial, it is worthwhile to look at past instances to see if there are lessons that can be applied to limiting or prohibiting certain types of AI research.
This paper reviews previous restrictions on cryptographic, nuclear, chemical, and biological research. Each raises policy issues relevant to understanding how various forms of AI research might be disincentivized or prohibited.
Cryptographic Research
Cryptography played an essential role in World War II. The German Reich used the Enigma cipher to encrypt its communications, and Britain’s success in cracking the code provided key intelligence that aided the Allied war effort1.
In 1954, the American government classified cryptography as a munition2, placing it in the same legal category as tanks, missiles, and explosives. As a result, it was subject to the same export controls as these weapons via the U.S. Munitions List (USML), which was governed by the International Traffic in Arms Regulations (ITAR)3. Exporting cryptographic technology without government approval thus became a federal crime, and American academic researchers could potentially face legal consequences for collaborating on cryptographic research with colleagues in other countries.
By the 1960s, cryptography began to have commercial applications, which complicated the regulatory framework4. Companies needed strong encryption for wire transfers, and large organizations started using mainframe computers (where multiple users were able to access the same computer). In 1977, the Data Encryption Standard (DES) became the federal encryption standard5. This allowed banks and other companies to take advantage of cryptography’s commercial uses.
Public-key cryptography emerged in the mid-1970s. In 1976, two researchers, Whitfield Diffie and Martin Hellman, published a paper called “New Directions in Cryptography”6 that introduced schemes in which anyone could encrypt a message but only the holder of a private “key” could decrypt it. This contrasted with previous cryptographic methods, where both parties first had to share a secret key: one party could now send protected information to another without any prior exchange of secrets. The NSA was resistant to this technology because it made strong cryptography available to anyone who wanted to hide secrets.
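To make that shift concrete, below is a minimal sketch of the key-exchange protocol from that paper, written in Python with deliberately tiny numbers chosen purely for illustration (real deployments use primes thousands of bits long): two parties derive an identical shared secret over a public channel without ever transmitting it.

```python
# Toy Diffie-Hellman key exchange (illustrative only: real systems use
# primes thousands of bits long, not a two-digit prime like this one).

p, g = 23, 5           # public parameters: a small prime and a generator

alice_secret = 6       # chosen privately by Alice, never transmitted
bob_secret = 15        # chosen privately by Bob, never transmitted

# Each party publishes g^secret mod p; eavesdroppers see only these values.
alice_public = pow(g, alice_secret, p)   # 8
bob_public = pow(g, bob_secret, p)       # 19

# Each party combines the other's public value with its own secret.
alice_shared = pow(bob_public, alice_secret, p)
bob_shared = pow(alice_public, bob_secret, p)

assert alice_shared == bob_shared == 2   # identical shared key, never sent
```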
While Diffie and Hellman’s paper was a conceptual breakthrough, another development shortly afterward made public-key cryptography significantly more useful: the RSA algorithm7, formally published in 1978. RSA provided a practical implementation of public-key encryption along with digital signatures, which were useful for authentication. Before RSA, the US government had a near-monopoly on state-of-the-art civilian cryptography; RSA ended that.
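As a companion illustration, the toy sketch below shows the two RSA operations described above, encryption and digital signatures, using textbook-sized primes chosen only for readability; real keys are roughly 2048 bits and use padding schemes omitted here.

```python
# Toy RSA with textbook-sized numbers (illustration only; real RSA uses
# ~2048-bit keys and padding such as OAEP/PSS, omitted here).

p, q = 61, 53
n = p * q                      # 3233, the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent
d = pow(e, -1, phi)            # 2753, the private exponent (Python 3.8+)

message = 65

# Encryption: anyone with the public key (n, e) can encrypt...
ciphertext = pow(message, e, n)
# ...but only the private-key holder can decrypt.
assert pow(ciphertext, d, n) == message

# Digital signature: sign with the private key, verify with the public key.
signature = pow(message, d, n)
assert pow(signature, e, n) == message   # authenticates the sender
```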
By the early 1990s, tensions over this technology grew for multiple reasons. The increasing popularity of the Internet and the rise of e-commerce depended on strong encryption being widely available. America’s economic competitiveness was also at risk because technology companies were forced to export versions of their software with weaker cryptographic protection8. Furthermore, a “cypherpunk”9 movement developed, consisting of computer scientists and privacy advocates who saw the government’s policies as unreasonable. People within this movement printed encryption algorithms in books and on t-shirts10, and a cypherpunk named Phil Zimmermann released a free encryption program called PGP (“Pretty Good Privacy”) on the Internet for anyone to use11.
In 1993, the Clinton administration proposed a compromise: the Clipper Chip12, a hardware-based encryption system that could be installed in phones and computers (it never progressed beyond the prototype phase). The encryption was strong, but it contained a backdoor that law enforcement could use, with a court order, to decrypt communications on a specific phone or computer. There was a large backlash to the Clipper Chip from civil liberties groups, tech companies, and scientists, who argued that the technology expanded the government’s capacity for mass surveillance. As a result, the Clipper Chip was abandoned in 199613.
Also in 1996, a major court decision changed what policies the government could enact. In Bernstein v. United States Department of State14, the United States District Court for the Northern District of California ruled in favor of Daniel Bernstein, a PhD student at UC Berkeley. Bernstein had created an encryption algorithm called “Snuffle” and wanted to publish both an academic paper about it and its source code. However, the State Department informed him that his source code was considered a munition, so he would need to register as an international arms dealer and obtain an export license to publish the paper and code internationally. Bernstein sued the government, and the Electronic Frontier Foundation helped him argue that computer source code is a form of speech protected by the First Amendment. The ruling undermined the government’s ability to enforce export controls on published encryption source code, and academic researchers and software developers could discuss their cryptographic research more easily without fear of legal repercussions.
Another relevant development occurred in 1996 as well: the Clinton administration moved jurisdiction over most commercial encryption from the State Department to the Department of Commerce15. Encryption was reclassified from a munition to a dual-use good, i.e., an item with both military and commercial applications.
In 2000, a second important court case, Junger v. Daley16, was decided at the Sixth Circuit (a higher level than the Bernstein case); it also concerned the export of encryption software. Peter D. Junger, a Case Western professor, wanted to teach a class about computer law, but because export restrictions classified encryption software as a munition, he could not discuss technical details about encryption with students from other countries and therefore could not admit foreign students to his class. The Sixth Circuit ruled in Junger’s favor, holding that source code is expressive and protected by the First Amendment.
Also in 2000, but before the Junger case was decided, the Clinton administration eliminated most restrictions on the export of retail and open-source cryptographic software17. Since then, the federal government has had less stringent rules about cryptography.
The US government’s previous cryptography policies show that it is difficult for a country to curtail the spread of research (particularly to other countries). Export controls on algorithms are hard to implement, and they can easily fail. Artificial intelligence in 2025 also differs from 20th-century cryptography in important ways. First, information spreads far more quickly today: a post on X (formerly Twitter) can inform millions of people about new AI research within hours. Second, AI research tends to be openly published or open-sourced (via arXiv, academic conferences, code releases, etc.). As a result, new ideas in this field move extremely fast, and inspecting all of them ahead of time would be infeasible unless highly stringent laws were passed.
Additionally, if the US government wanted AI research removed from the Internet, it would need platforms and service providers to cooperate with takedowns, and that cooperation would not necessarily be timely (or forthcoming at all). Furthermore, because AI research is global and decentralized, problematic research taken down from one platform could easily reappear on another, and it could also be transferred and accessed via onion routing (such as the Tor network).
The Bernstein case also sets an important precedent: source code is speech that is protected by the First Amendment. Thus, the court system would likely rule against the government if it were to ban individuals from publishing algorithmic advances that they discovered (however, this has not been tested at the Supreme Court level).
Nuclear Research
The first nuclear bomb was successfully detonated on July 16th, 1945, by the United States at the Trinity test site in New Mexico18. Within a month, the US dropped nuclear bombs on Hiroshima and Nagasaki, leading to the Japanese surrender on September 2nd, 1945, and the end of World War II19.
The nuclear weapons research leading to the first successful test was known as the Manhattan Project20. The US president at the time, Franklin Delano Roosevelt, demanded “absolute secrecy” for the project, and compartmentalization was used as a way of minimizing the number of people who knew the full extent of the research. Individuals could also be sentenced to up to ten years in prison for disclosing secrets about the Manhattan Project. Additionally, the government’s Office of Censorship asked journalists not to discuss topics related to nuclear energy.
The year after the Trinity detonation, America passed the Atomic Energy Act of 1946 (also known as the McMahon Act). This created “Restricted Data” as a legal category, which included “all data concerning design, manufacture or utilization of atomic weapons… whether created inside or outside government.”21 The Act also introduced the concept of “Born Secret,” which applied to Restricted Data: such information was classified the moment it was created. Individuals could be prosecuted for divulging Born Secret information; prosecutions were rare, but the few that did occur served as important deterrents.
Several factors made nuclear research secrecy feasible during the early Cold War. First, there was low substitutability: a specific equation or schematic (such as the Teller-Ulam design for a hydrogen bomb or the implosion method used in the “Fat Man” bomb) was a major shortcut that rivals could not easily reproduce without lengthy experimentation, so withholding it meaningfully slowed them down. Second, there was identifiability: specific numbers (like the exact critical masses of uranium-235 and plutonium-239) and details (like isotope separation techniques) were clear red flags that could be censored. Third, physical facilities were a major bottleneck for nuclear weapons research and design: even with the correct equations, an organization would still need enriched material to build a bomb, and several of the required processes, such as uranium mining, reactor construction, and shipment of specialized equipment, were relatively easy to detect through surveillance.
These elements (low substitutability, identifiability, and physical facilities) do not work as well for AI research secrecy as they did for nuclear research secrecy. Regarding substitutability, numerous AI breakthroughs happen at private companies, and researchers often move to competing companies. While they might have signed nondisclosure agreements with their previous companies, it is likely that some of these researchers use their knowledge to help improve models at the companies they moved to. For identifiability, the boundary between benign and harmful AI systems often depends on the context, so it is difficult to have clear signals that flag a model as harmful (unlike nuclear research where weapons-grade materials and processes provide obvious red flags). On the matter of physical facilities, the infrastructure for AI is more accessible and distributed than it is in the nuclear domain. While a country might need uranium enrichment to build a nuclear bomb, an AI company only needs to use a commercial cloud provider or a cluster of GPUs to train a dangerous model.
International coordination on nuclear weapons, however, provides more useful lessons for AI policy. In 1946, America proposed the Baruch Plan22, which called for the United States to eliminate its nuclear weapons only after international mechanisms were established to prevent all other countries from developing them. The USSR rejected the plan, arguing that the US should dismantle its nuclear weapons before such enforcement mechanisms were put in place. In 1949, the USSR successfully tested its first nuclear bomb23.
In 1957, the International Atomic Energy Agency (IAEA) was established within the United Nations system24. The agency monitors nuclear programs throughout the world and provides technical assistance for peaceful uses of nuclear energy while verifying that countries do not use this assistance to build weapons. The IAEA attempts to track every gram of fissile material, and under its safeguards agreements states must declare all nuclear materials and facilities.
In 1963, the Partial Test Ban Treaty was signed and went into effect25. It banned all nuclear weapons test detonations except for underground ones. It was signed by the US, the USSR, and the UK. As of 2025, there are 126 countries that are parties to the treaty.
In 1968, arguably the most important nuclear weapons treaty was signed: the Treaty on the Non-Proliferation of Nuclear Weapons26 (also known as the Non-Proliferation Treaty (NPT)). This treaty, which went into effect in 1970, required all nuclear states that ratified it to promise eventual disarmament, and all non-nuclear states that ratified it to promise they would not develop nuclear weapons. In exchange, every country that was a party to the treaty would gain access to peaceful nuclear technologies. Currently, there are 190 countries that are parties to the treaty (technically 191 because North Korea’s withdrawal from the treaty in 2003 was never formally accepted by the other parties). The following nuclear states are not parties to this treaty: India, Pakistan, and Israel (the only non-nuclear state that is not a party to this treaty is South Sudan).
In 1996, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was signed27. The CTBT bans all nuclear explosions, including those for civilian purposes. The treaty has never entered into force, however, because most nuclear powers (America, Russia, China, India, Pakistan, North Korea, and Israel) have not ratified it.
While nuclear weapons still exist, there are Nuclear-Weapon-Free Zones (NWFZs) in parts of the world28. These areas include Antarctica, Latin America, the Caribbean, the South Pacific, Southeast Asia, Central Asia, and most of Africa.
International coordination on nuclear weapons could be applied to AI policy in multiple ways. First, an international body akin to the IAEA that coordinates on AI policy would be beneficial, as it would help prevent (or at least dampen) the race dynamic between countries. Second, a treaty like the NPT that requires all parties to agree not to develop certain types of AI capabilities would further decrease that race dynamic. Third, treaties only serve a valuable purpose if they actually enter into force (the CTBT’s delay highlights this), so there needs to be enough support from the international community to ensure an AI treaty succeeds.
Importantly, it is harder to verify AI capability gains than nuclear ones. Converting nuclear material into weapons-grade form requires large, detectable industrial processes, which is why the IAEA has been able to provide effective oversight. AI capability gains, by contrast, are not confined to state programs, so they are far more difficult to track.
If AI treaties are to succeed, they should be negotiated and brought into force more quickly than nuclear treaties were. Nuclear treaties have often taken years to take effect; given how rapidly AI capabilities are advancing, a treaty that takes that long could be obsolete by the time it applies, as the technology might have changed significantly since the treaty was drafted.
Chemical Research
The Chemical Weapons Convention (CWC) was signed in January 1993 and entered into force in April 199729. It bans the development, production, stockpiling, use, acquisition, and transfer of chemical weapons, and it prohibits research that is specifically aimed at creating or improving them. The CWC applies to states rather than directly to non-state actors; it expects each state party to enforce its rules against any non-state actors in its territory. There are currently 193 parties to the treaty (four UN member states are not parties: Egypt, Israel, North Korea, and South Sudan), and all parties are required to destroy any chemical weapons they possess. The Organization for the Prohibition of Chemical Weapons (OPCW), which administers the treaty, verifies the destruction of these weapons.
The CWC (administered by the OPCW) sorts controlled chemicals into three schedules, each with its own declaration and verification rules: Schedule 1 covers chemicals with few or no uses outside weaponry, Schedule 2 covers chemicals with limited uses outside weaponry, and Schedule 3 covers chemicals with major uses outside weaponry.
The CWC is a useful reference for AI policy because it shows how a classification scheme could tailor different policies to different AI technologies, depending on how strong their dual-use capabilities are. For example, Schedule 1 could cover AI technologies used primarily for military applications (such as lethal autonomous weapons), Schedule 2 could cover technologies with substantial dual-use capabilities (such as AI agents), and Schedule 3 could cover technologies with mainly commercial uses (such as AI for personalized advertising).
Biological Research
Like the Chemical Weapons Convention, the Biological Weapons Convention (BWC) bans the development, production, stockpiling, use, acquisition, and transfer of biological weapons30, and it prohibits research conducted for the purpose of creating or improving such weapons. As with the CWC, the BWC applies to states rather than directly to non-state actors, and it expects each state party to enforce its rules against any non-state actors in its territory. The BWC was signed in April 1972 and went into effect in March 1975. There are currently 189 parties to the treaty.
The BWC was pioneering because it was the first multilateral treaty to ban a whole class of weapons of mass destruction. However, unlike the CWC, the BWC does not have a verification regime. Countries are instead expected, but not required, to engage in domestic monitoring and enforcement. This has resulted in countries having different levels of oversight. For example, the US31 has stringent policies, whereas Sudan32 does not.
While the degree of state monitoring and enforcement varies by party, no country currently acknowledges that it has (or seeks to have) biological weapons. However, certain countries are suspected of maintaining covert bioweapons programs (such as Russia and North Korea)33. Furthermore, the Soviet Union secretly maintained its bioweapons program for two decades after it signed the BWC34. This program, known as Biopreparat, employed tens of thousands of people and operated dozens of facilities across the USSR, working to weaponize deadly agents such as anthrax, smallpox, plague, and Marburg virus. The deception was publicly confirmed by Russian President Boris Yeltsin in 199235. Non-state actors have also pursued biological weapons despite their host states’ prohibitions; the Aum Shinrikyo cult in Japan is a clear example36.
Relevantly, a treaty preceding the BWC and CWC, the Geneva Protocol37 (signed in 1925, in force since 1928), banned the use of chemical and biological weapons in warfare. It was less extensive than the BWC and CWC, however, because it did not ban the production, storage, or transfer of such weapons. Major powers such as France and Germany signed it, but its limited scope meant that countries continued developing chemical weapons throughout the interwar period, and the Protocol ultimately failed to prevent their use during World War II.
For AI policy, the BWC also shows the limits of restricting research when compliance is difficult to verify: an agreement that relies largely on an honor system is considerably less effective.
Importantly, biological research has also been restricted through voluntary self-regulation. A key example comes from 1974, when a group of US researchers published a letter calling on researchers to voluntarily pause the use of recombinant-DNA methods (genetic engineering)38. They argued that the pause was needed so that safety protocols could first be devised for the new technology. This led to the Asilomar Conference on Recombinant DNA in 1975, where scientists agreed upon best practices for genetic engineering and established different protocols for different types of experiments, depending on how hazardous an experiment was. However, the USSR’s Biopreparat program had recently launched, and Soviet scientists conducted recombinant-DNA experiments during the requested pause. Thus, the scientist-led pause did not stop all recombinant-DNA research, but it did decrease it.
The Asilomar Conference on Recombinant DNA is applicable to AI policy because it shows that self-regulation could be beneficial (but there might be defectors without strong verification mechanisms). Also, the AI research community could craft different protocols for different types of research, depending on the level of risk such research created. Additionally, conferences between major AI companies could be useful for many of the key AI researchers in the field to discuss and agree to best practices.
Scientists also voluntarily agreed to another research pause when they realized that certain types of H5N1 experiments were too high-risk to conduct at the time39. Specifically, they were concerned about gain-of-function research intended to discover how transmissible the virus could become in mammals. The pause began in January 2012 and was supposed to end 60 days later, but the scientists extended it indefinitely (eventually ending it after about a year) to allow more time to weigh the risks of the research against its benefits and to ensure appropriate safety and security measures were in place before work resumed.
While scientists decided among themselves for an H5N1 gain-of-function pause, the US government declared a federal pause in funding for gain-of-function research in 201440, so policymakers could have time to assess the risks and benefits of this type of research (the pause ended in 2017). Labs that still engaged in gain-of-function research during the pause jeopardized their chances of future federal funding.
Both the scientist-led pause on H5N1 gain-of-function research and the US government’s pause in funding for gain-of-function research provide important lessons that are applicable to AI. The H5N1 pause and Asilomar Conference show that self-regulation could help promote safety and lower the risk of a catastrophe. Also, just as the US government implemented a federal pause on funding for gain-of-function research, it could also pause funding for AI research that it considers dangerous. However, this policy would likely have a limited effect, as most of the leading AI research in America occurs in the private sector.
Policies for Discouraging Dangerous AI Research
Previous research restrictions have had varying degrees of success, but AI policies could be devised that use these interventions as a reference. Based on this historical review, this paper concludes that the most effective lever for minimizing dangerous AI research would be an international treaty that categorizes AI research into tiers, applies different policies to each tier, and establishes an international body for monitoring and enforcement.
A key component of the CWC is its classification system for different types of chemicals. Likewise, an AI treaty could classify AI research into multiple tiers. One method of classification would be by degree of dual-use capability: Tier 1 for research with minimal beneficial applications (such as automated cyberattack weapons), Tier 2 for dual-use research (such as AI agents that use tools), and Tier 3 for research that is mainly commercial but capable of misuse (such as personalized advertising). These tiers would then determine which regulations apply. However, this approach is imperfect because almost all AI research has dual-use applications, which limits its value.
A more beneficial classification system would be one that focuses on risk levels. This system could establish different protocols for each risk category without having to gauge intent as thoroughly. For example, research on recursive self-improvement would have stricter policies than research on recommendation systems.
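To suggest how a risk-based scheme might be operationalized, here is a brief sketch in Python. The tier names, example research categories, and oversight requirements are purely hypothetical illustrations introduced for this sketch, not provisions of any existing or proposed treaty.

```python
# Hypothetical sketch of a risk-tier scheme for AI research oversight.
# All tiers, categories, and requirements below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RiskTier:
    name: str
    example_categories: list[str]
    oversight: str

TIERS = [
    RiskTier("Tier 1 (highest risk)",
             ["recursive self-improvement", "automated cyberattack tooling"],
             "pre-registration, international review, publication restrictions"),
    RiskTier("Tier 2 (dual-use)",
             ["tool-using AI agents", "frontier model scaling"],
             "licensing, incident reporting, third-party audits"),
    RiskTier("Tier 3 (mainly commercial)",
             ["recommendation systems", "personalized advertising"],
             "standard domestic regulation and transparency requirements"),
]

def required_oversight(category: str) -> str:
    """Return the oversight regime for a research category (default: Tier 3)."""
    for tier in TIERS:
        if category in tier.example_categories:
            return f"{tier.name}: {tier.oversight}"
    return f"{TIERS[-1].name}: {TIERS[-1].oversight}"

print(required_oversight("recursive self-improvement"))
```

A real regime would of course need far more nuanced criteria, but the point of the sketch is that classifying by risk category, rather than by intent, lets each category carry its own pre-specified protocols.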
An AI treaty would need verification mechanisms to be effective. It would thus be prudent to have an international body that helps countries coordinate on policies and verifies that all parties are abiding by the treaty. This worked for nuclear research (with the IAEA) and chemical research (with the OPCW), although it would be harder for the field of AI because research is more dispersed in this domain and primarily takes place in the private sector. However, just as parties to the CWC and BWC were expected to enforce rules for non-state actors in their jurisdictions, countries that are parties to an AI treaty could pressure AI companies operating in their territories to emphasize safety as an important component of their research processes.
At the international level, countries that refuse to join the treaty or violate its terms would face coordinated trade restrictions on advanced AI chips and the supercomputing infrastructure required for training frontier AI systems. Additionally, treaty participants could restrict market access for AI developed through prohibited research methods, creating strong economic incentives for companies to avoid dangerous research even if their home countries permit it.
More broadly, just as various nuclear treaties helped decrease the nuclear arms race between America and the Soviet Union41, an AI treaty could dampen race dynamics between countries aiming to have better AI capabilities than their adversaries. However, an AI treaty would likely need to be implemented more quickly than previous treaties that restricted dangerous research, so that it remains relevant once in force.
Conclusion
While scientific research restrictions are difficult to implement and often controversial, they can serve an important purpose in preventing a catastrophe. Given that artificial intelligence could cause severe harms to humanity, it is worthwhile to seriously consider scientific research restrictions for the most dangerous forms of artificial intelligence research.
Acknowledgements
This paper was written for a 2025 summer research fellowship through the Cambridge Boston Alignment Initiative. The author would like to thank Aaron Scher and Christopher Ackerman for their guidance and feedback, and Josh Thorsteinson for sharing a related manuscript and providing helpful feedback.
References
1. The Enigma of Alan Turing. Central Intelligence Agency. https://www.cia.gov/stories/story/the-enigma-of-alan-turing/
2. Export of cryptography from the United States. Wikipedia. https://en.wikipedia.org/wiki/Export_of_cryptography_from_the_United_States
3. Export of Defense Articles and Services – ITAR | Office of Research Security & Trade Compliance. University of Pittsburgh. https://www.researchsecurity.pitt.edu/export-defense-articles-and-services-itar
4. Deciphering the Cryptography Debate. Brookings. https://www.brookings.edu/articles/deciphering-the-cryptography-debate/
5. Cryptography | NIST. National Institute of Standards and Technology. https://www.nist.gov/cryptography
6. New Directions in Cryptography. IEEE Transactions on Information Theory (PDF hosted by Stanford). https://www-ee.stanford.edu/~hellman/publications/24.pdf
7. A Method for Obtaining Digital Signatures and Public-Key Cryptosystems. Author-hosted PDF (Rivest, Shamir, Adleman). https://people.csail.mit.edu/rivest/Rsapaper.pdf
8. Tough on Crime or Tough on Competition? How Federal Law Enforcement’s Push for Surveillance-Friendly Technology Poses a Substantial Threat to the U.S. Software Industry’s Competitiveness. University of Florida Journal of Technology Law & Policy (PDF). https://scholarship.law.ufl.edu/cgi/viewcontent.cgi?article=1230&context=jtlp
9. Cypherpunk | Internet Policy Review. Internet Policy Review. https://policyreview.info/glossary/cypherpunk
10. Crypto Wars. GitHub Pages. https://uwillnvrknow.github.io/deCryptMe/pages/cryptowars3.html
11. Phil Zimmermann’s Home Page. Phil Zimmermann. https://philzimmermann.com/EN/background/index.html
12. The Clipper Chip. Electronic Privacy Information Center. https://archive.epic.org/crypto/clipper/
13. Sinking the Clipper Chip. Discourse Magazine. https://www.discoursemagazine.com/p/sinking-the-clipper-chip
14. Bernstein v. US Dept. of State, 945 F. Supp. 1279 (N.D. Cal. 1996) :: Justia. Justia. https://law.justia.com/cases/federal/district-courts/FSupp/945/1279/1457799/
15. Encryption: Administration Opens The Door To Domestic Regulation As Congress Debates Privacy, Commercial And Security Concerns. Wiley. https://www.wiley.law/newsletter-5
16. JUNGER v. DALEY (2000). FindLaw. https://caselaw.findlaw.com/us-6th-circuit/1074126.html
17. U.S. Removes More Limits on Encryption. The New York Times. https://www.nytimes.com/2000/01/13/business/us-removes-more-limits-on-encryption.html
18. Trinity: World’s First Nuclear Test. Air Force Nuclear Weapons Center. https://www.afnwc.af.mil/About-Us/History/Trinity-Nuclear-Test/
19. The Atomic Bombs That Ended World War 2 | Imperial War Museums. Imperial War Museums. https://www.iwm.org.uk/history/the-atomic-bombs-that-ended-the-second-world-war
20. Manhattan Project. Wikipedia. https://en.wikipedia.org/wiki/Manhattan_Project
21. Atomic Energy Act of 1946. U.S. Department of Energy (PDF). https://doe-humangenomeproject.ornl.gov/wp-content/uploads/2023/02/Atomic_Energy_Act_of_1946.pdf
22. The Acheson-Lilienthal & Baruch Plans, 1946. U.S. Department of State – Office of the Historian. https://history.state.gov/milestones/1945-1952/baruch-plans
23. RDS-1. Wikipedia. https://en.wikipedia.org/wiki/RDS-1
24. History | IAEA. International Atomic Energy Agency. https://www.iaea.org/about/overview/history
25. Treaty Banning Nuclear Weapon Tests in the Atmosphere, in Outer Space and Under Water. United Nations Treaty Collection. https://treaties.un.org/pages/showDetails.aspx?objid=08000002801313d9
26. Treaty on the Non-Proliferation of Nuclear Weapons (NPT) – UNODA. United Nations Office for Disarmament Affairs. https://disarmament.unoda.org/wmd/nuclear/npt/
27. Comprehensive Nuclear-Test-Ban Treaty (CTBT) – UNODA. United Nations Office for Disarmament Affairs. https://disarmament.unoda.org/wmd/nuclear/ctbt/
28. Nuclear-Weapon-Free Zones – UNODA. United Nations Office for Disarmament Affairs. https://disarmament.unoda.org/wmd/nuclear/nwfz/
29. Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on Their Destruction. United Nations Treaty Collection. https://treaties.un.org/pages/viewdetails.aspx?src=treaty&mtdsg_no=xxvi-3&chapter=26
30. UNODA Treaties Database — Biological Weapons Convention (BWC). United Nations Office for Disarmament Affairs. https://treaties.unoda.org/t/bwc
31. United States of America — BWC Implementation (UNIDIR). BWC Implementation Support Unit. https://bwcimplementation.org/states/united-states-america
32. Sudan — BWC Implementation (UNIDIR). BWC Implementation Support Unit. https://bwcimplementation.org/states/sudan
33. The State of Compliance with Weapons of Mass Destruction-Related Treaties. Council on Strategic Risks. https://councilonstrategicrisks.org/2024/04/19/the-state-of-compliance-with-weapons-of-mass-destruction-related-treaties/
34. Biopreparat. Wikipedia. https://en.wikipedia.org/wiki/Biopreparat
35. Biological Weapons — Russian / Soviet Nuclear Forces. Federation of American Scientists. https://nuke.fas.org/guide/russia/cbw/bw.htm
36. Aum Shinrikyo: Once and Future Threat? | Office of Justice Programs. U.S. Office of Justice Programs. https://www.ojp.gov/ncjrs/virtual-library/abstracts/aum-shinrikyo-once-and-future-threat
37. 1925 Geneva Protocol – UNODA. United Nations Office for Disarmament Affairs. https://disarmament.unoda.org/wmd/bio/1925-geneva-protocol/
38. Asilomar and recombinant DNA – NobelPrize.org. NobelPrize.org. https://www.nobelprize.org/prizes/chemistry/1980/berg/article/
39. H5N1 Researchers Announce End of Research Moratorium. Science. https://www.science.org/content/article/h5n1-researchers-announce-end-research-moratorium
40. Ban on gain-of-function studies ends. (PMC7128689). PubMed Central. https://pmc.ncbi.nlm.nih.gov/articles/PMC7128689/
41. Nuclear arms race. Wikipedia. https://en.wikipedia.org/wiki/Nuclear_arms_race


