Potential Use of Generative AI for Bioweapons Creation Deemed Slightly Likely

Artificial intelligence, particularly generative models such as GPT-4, is growing in popularity for its applications in research and content generation. While the technology is usually put to benign use, there is a risk that it could be turned to nefarious ends, such as bioweapon production. A group of scholars and specialists investigated how easily GPT-4 could be employed in the creation of biological weapons. Although they concluded that the likelihood is low, the risk is not entirely absent. Concerningly, this insight arrives in the wake of a partnership between OpenAI and the United States Department of Defense.

Understanding the difference between chatbots and large language models (LLMs) is crucial. A chatbot, like ChatGPT, serves as the user interface where text is entered and responses are displayed. GPT-4, by contrast, is the underlying engine that processes those prompts and returns the output to the chatbot. Subscribers to ChatGPT Plus gain access to the more capable GPT-4 model, while free-tier users’ interactions run on the more limited GPT-3.5 model.
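The interface-versus-engine split above can be made concrete with a minimal sketch. The helper below is hypothetical, but the `model` field and message shape follow OpenAI’s public chat API conventions: the chatbot front end merely packages the user’s text, and the chosen model name determines which engine processes it.

```python
# Sketch: a chatbot front end packages the user's text into a request;
# the "model" field selects the underlying engine (GPT-4 vs GPT-3.5).
def build_chat_request(user_text: str, plus_subscriber: bool) -> dict:
    # Hypothetical helper; model names follow OpenAI's public API naming.
    model = "gpt-4" if plus_subscriber else "gpt-3.5-turbo"
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
    }

# A Plus subscriber's prompt is routed to the GPT-4 engine:
request = build_chat_request("Summarise this paper.", plus_subscriber=True)
print(request["model"])  # → gpt-4
```

The point of the sketch is that swapping engines changes only one field of the request; everything the user sees (the chatbot) stays the same.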

Analysing GPT-4’s Potential for Bioweaponry Assistance

In response to concerns about AI misuse, the Biden Administration issued an executive order directing the Department of Energy to prevent AI technologies from facilitating the creation of nuclear, biological, or chemical weapons. Even ahead of such measures, OpenAI had taken proactive steps by establishing a safety team tasked with addressing these kinds of threats.

A comprehensive study enlisted 100 participants, including biology experts and biology students, to test GPT-4’s ability to provide instructions for bioweapon manufacturing. Half of the participants were given basic internet access only, while the other half had both internet access and a special, unrestricted version of GPT-4. Their objective was to probe the AI for vulnerabilities, attempting to coax it into revealing methods for creating lethal weapons, such as synthesising the Ebola virus or developing targeted biological weapons.

The Outcome of the Research

The findings raise some concerns. The group with internet access alone managed to uncover some bioweapon creation methods, while the participants working with GPT-4 produced answers with somewhat greater “accuracy and completeness”. Still, according to the researchers, GPT-4’s assistance only marginally enhanced participants’ ability to obtain information useful for bioterrorism.

This research is crucial in today’s technological climate, with AI playing an increasingly central role. It is imperative that AI companies engage in rigorous analysis to mitigate potential threats. In an era where AI-generated art, music, and literature are commonplace, ensuring that this technology does not contribute to harm or the loss of human life is paramount.