US Regulator Launches Investigation into ChatGPT Over False Information and Data Handling

The US Federal Trade Commission is investigating OpenAI to determine if its hugely popular ChatGPT app harms consumers by generating false information and whether its technology mishandles user data.

Microsoft-backed OpenAI was notified of the investigation via a 20-page questionnaire asking the company to describe incidents in which users were falsely disparaged and to share any efforts it has made to ensure this does not happen again.

The investigation by the US regulator was first reported by The Washington Post.

Powerful AI Under Investigation

The US Federal Trade Commission (FTC) has set its sights on OpenAI, probing the potential harms caused by its popular ChatGPT app. The investigation seeks to determine whether the app generates false information about individuals and whether user data is being mishandled. The FTC notified Microsoft-backed OpenAI through a detailed, 20-page questionnaire that asks the company to document incidents in which users were falsely disparaged and to describe its efforts to prevent such occurrences in the future.

OpenAI’s release of ChatGPT last November stunned the world by showcasing the power of large language models (LLMs). These models are a form of generative artificial intelligence capable of producing human-like content within seconds. Alongside the technology’s impressive capabilities, however, reports soon emerged of the models producing offensive, false, and bizarre content, fabricated outputs that have come to be known as “hallucinations”.

1. Concerns Over Libellous Output

FTC Chair Lina Khan raised concerns about ChatGPT’s potential to produce libellous content during a congressional committee hearing. Although she did not directly mention the investigation, Khan cited instances in which sensitive information had been disclosed in response to queries, as well as the emergence of defamatory and blatantly false statements. The FTC aims to combat fraud and deception of this nature.

2. Focus on User Harm and Data Usage

According to the questionnaire, the FTC’s investigation centers on how ChatGPT’s output may harm users. The probe also examines OpenAI’s handling of private data in the development of its flagship GPT-4 model. OpenAI licenses its technology to various companies, enabling them to access the model for their own purposes.

Possible Future Actions

It is important to note that an FTC investigation does not inevitably result in further legal action. If the regulator is satisfied with OpenAI’s responses, the case may be closed. However, if the FTC identifies any illegal or unsafe practices, it can demand remedial action and potentially initiate a lawsuit. As of now, neither OpenAI nor the FTC has responded to requests for comment.

Source: AFP
