OPINION | Views expressed in this article reflect the author's opinion.

Current artificial intelligence systems are riddled with liberal bias.

For example, OpenAI’s ChatGPT system claims people like President Donald Trump and Elon Musk are “controversial” figures while President Joe Biden and Bill Gates are not.

Musk is reportedly assembling a team of artificial intelligence experts and taking matters into his own hands to build an alternative to ChatGPT.

The founder of Gab, a right-wing social media platform, is also reportedly working on AI software with “the ability to generate content freely without the constraints of liberal propaganda wrapped tightly around its code.”

Musk says he has been waging a battle for several months against “woke” artificial intelligence, which he says includes manipulative systems trained to lie.

“The danger of training AI to be woke – in other words, lie – is deadly,” Musk wrote on social media.

Advanced AI technology poses “profound risks to society and humanity,” Musk argued in an open letter he signed, warning that AI could be used to “flood our information channels with propaganda and untruth.”

For example, in February, Musk noted that ChatGPT lists former President Trump and Musk himself as “controversial” figures while President Biden and Bill Gates are not.

As a result, Musk has called for a six-month pause in the development of next-generation AI systems.

More on this story via Fox News:


OpenAI’s GPT-4 is the latest deep learning model from the company, one that “exhibits human-level performance on various professional and academic benchmarks,” according to the lab. Musk has repeatedly criticized the system, including for its apparent liberal bias, such as its willingness to write a favorable poem about President Biden while refusing to do the same for former President Donald Trump.

“The danger of training AI to be woke – in other words, lie – is deadly,” Musk tweeted at OpenAI CEO Sam Altman in December.

Meanwhile, a professor in New Zealand carried out an experiment on a chatbot, administering a series of quizzes to determine whether it showed any bias. Across more than a dozen tests, David Rozado found consistent “liberal,” “progressive” and “Democratic” political bias.

Rozado, a professor at Te Pūkenga-New Zealand Institute of Skills and Technology, began another experiment dubbed “RightWingGPT,” which trained the system to produce answers with a conservative leaning.