  • ChatGPT under fire from FTC over data leak and inaccuracy

    alf9872000


    The U.S. Federal Trade Commission has opened an investigation into OpenAI, the firm behind ChatGPT, to determine whether it violated consumer protection laws by spreading misleading information through its chatbot and by scraping public data.

    In a 20-page letter, the agency asked OpenAI for detailed information on its AI technology, products, customers, privacy policies, and data-security measures.

    The action against San Francisco-based OpenAI represents the largest regulatory threat to date to the startup that launched the generative artificial intelligence craze, captivating consumers and companies while raising concerns about its potential dangers.

    [Image: FTC ChatGPT investigation (Getty Images)]

    FTC ChatGPT investigation targets the spread of false information


    An FTC spokesperson declined to comment on the investigation, which was first reported on Thursday by the Washington Post.

    The FTC has also requested that OpenAI disclose the data it used to train the large language models that serve as the foundation for services like ChatGPT, which the company has so far declined to do. American comedian Sarah Silverman is among the writers suing OpenAI over allegations that its models were trained on data that included their works.

    The FTC has asked OpenAI to disclose whether it obtained the data directly from the internet (via "scraping") or by purchasing it from other parties. It also requests the identities of the websites from which the data was obtained and details of any measures taken to ensure that personal information was not included in the training data.

    According to Enza Iannopollo, principal analyst at research firm Forrester, poor governance inside AI firms could be a "disaster" for consumers and for the companies themselves, opening them up to investigations and fines.

    “As long as large language models (LLMs) remain opaque and rely largely on scraped data for training, the risks of privacy abuses and harm to individuals will continue to grow,” she said.

    Source

