OpenAI’s latest artificial intelligence model, GPT-4o, poses a “medium risk” of influencing political opinions through generated text, according to the “System Card” report OpenAI published on August 8. The report addresses safety concerns associated with GPT-4o, the model that powers OpenAI’s ChatGPT service.
While the model was rated low risk in areas such as cybersecurity, biological threats, and model autonomy, it was deemed a medium risk in textual persuasion, specifically in influencing political opinions. The evaluation measured the model’s ability to sway political views rather than detecting inherent biases in its output.
Notably, in three out of twelve test cases, GPT-4o’s output proved more persuasive than content written by professional human writers. This underscores the capacity of AI models like GPT-4o to generate compelling content, but it also raises concerns about their potential impact on political discourse.
In a related development, OpenAI co-founder John Schulman has left the company to join its rival, Anthropic. His departure leaves only three of OpenAI’s eleven founders still at the company: CEO Sam Altman, president Greg Brockman, and head of language models Wojciech Zaremba.
As AI capabilities continue to advance, the potential for generated text to shape political opinions underscores the need for careful monitoring and regulation of how such technologies are developed and deployed. The implications of models like GPT-4o for political discourse and public opinion warrant close attention from developers and policymakers alike.