OpenAI says state-backed actors used its AI for misinformation campaigns.


San Francisco, California: OpenAI, the company behind ChatGPT, announced on Thursday that it had disrupted five attempts over the past three months to use its artificial intelligence models for deceptive activity.

In a blog post, OpenAI said the disrupted campaigns originated from Russia, China, Iran, and a private Israeli company.


The threat actors attempted to use OpenAI's language models to debug bot and website code and to generate articles, comments, and social media profiles.

These operations "do not appear to have benefited from meaningfully increased audience engagement or reach as a result of our services," the company said.


Companies like OpenAI are under intense scrutiny over concerns that tools such as ChatGPT and the image generator DALL-E can produce false content quickly and at scale.


The concern is heightened by major elections approaching around the world, as countries such as Russia, China, and Iran are known to run clandestine social media campaigns to inflame tensions ahead of election day.

One disrupted operation, dubbed "Bad Grammar," was a previously unreported Russian campaign targeting the United States, Moldova, Ukraine, and the Baltic States.

It used OpenAI's models and tools to generate short political comments in Russian and English for posting on Telegram.

The well-known Russian "Doppelganger" operation used OpenAI's tools to produce comments in English, French, German, Italian, and Polish on platforms such as X.

OpenAI also took down the Chinese influence operation "Spamouflage," which misused its models to research social media activity, generate multilingual text, and debug code for websites, including the previously undisclosed revealscum.com.

OpenAI likewise disrupted the "International Union of Virtual Media," an Iranian group that used its models to generate articles, headlines, and other content for Iranian state-affiliated websites.


Finally, OpenAI disrupted STOIC, a for-profit Israeli company that appeared to use its models to generate content for Instagram, Facebook, Twitter, and affiliated websites.

Meta, Facebook's parent company, also flagged this operation earlier this week.

The operations published content on platforms including Facebook, Twitter, Telegram, and Medium, "but none managed to engage a substantial audience," OpenAI said.



