Antisemitism was rising online. Then Elon Musk’s X supercharged it.

Since 2018, antisemitism has been on the rise online, with a growing number of neo-Nazi and white supremacist social media accounts. The problem has been compounded by the spread of conspiracy theories such as QAnon and Pizzagate.

Then Elon Musk’s X-experimental artificial intelligence, engineered by his Tesla labs, supercharged the problem by amplifying antisemitic messages and imagery through online channels.

The company faced a public outcry in 2018 after one of its algorithms was used to generate antisemitic memes, which were then pushed out onto social media. Soon afterward, a communications representative apologized for any offense caused and said the project had been terminated immediately.

While the contribution of Musk’s X-experimental artificial intelligence to online antisemitism may be small, it demonstrates the potential for artificial intelligence to amplify hate. The development of such systems should therefore incorporate ethical oversight and encourage user responsibility to prevent them from being used for malicious purposes.