ChatGPT creates mostly insecure code, but won't tell you unless you ask

    alf9872000


    Boffins warn of risks from chatbot model that, Dunning–Kruger style, fails to catch its own bad advice

    ChatGPT, OpenAI's large language model for chatbots, not only produces mostly insecure code but also fails to alert users to its inadequacies despite being capable of pointing out its shortcomings.

    Amid the frenzy of academic interest in the possibilities and limitations of large language models, four researchers affiliated with Université du Québec, in Canada, have delved into the security of code generated by ChatGPT, the non-intelligent, text-regurgitating bot from OpenAI.

    In a preprint titled "How Secure is Code Generated by ChatGPT?", computer scientists Raphaël Khoury, Anderson Avila, Jacob Brunelle, and Baba Mamadou Camara answer the question with research that can be summarized as "not very."

    "The results were worrisome," the authors state in their paper. "We found that, in several cases, the code generated by ChatGPT fell well below minimal security standards applicable in most contexts. In fact, when prodded to whether or not the produced code was secure, ChatGPT was able to recognize that it was not."

     

    The four authors reached that conclusion after asking ChatGPT to generate 21 programs and scripts, using a spread of languages: C, C++, Python, and Java.

    The programming tasks put to ChatGPT were chosen so that each would illustrate a specific security vulnerability, such as memory corruption, denial of service, and flaws related to deserialization and improperly implemented cryptography.

    The first program, for example, was a C++ FTP server for sharing files in a public directory. The code that ChatGPT produced included no input sanitization, leaving the software exposed to a path traversal vulnerability.
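
    The article doesn't reproduce the paper's C++ listing, but the missing check is a standard one. Below is a minimal sketch of it in Java (chosen to match the deserialization example later on; the helper and directory names are our own, illustrative only): it rejects any requested path that, once normalized, resolves outside the public directory.

    import java.nio.file.Path;

    public class SafeResolve {
        // Hypothetical helper, not taken from the paper: resolve a
        // client-supplied file name against the server's public directory
        // and reject anything that escapes it once ".." segments are
        // normalized away.
        static Path resolveInPublicDir(Path publicDir, String requested) {
            Path base = publicDir.toAbsolutePath().normalize();
            Path resolved = base.resolve(requested).normalize();
            if (!resolved.startsWith(base)) {
                throw new IllegalArgumentException(
                        "path escapes public directory: " + requested);
            }
            return resolved;
        }

        public static void main(String[] args) {
            Path pub = Path.of("/srv/ftp/public");  // illustrative directory
            System.out.println(resolveInPublicDir(pub, "docs/readme.txt")); // allowed
            try {
                resolveInPublicDir(pub, "../../etc/passwd");                // rejected
            } catch (IllegalArgumentException e) {
                System.out.println("blocked: " + e.getMessage());
            }
        }
    }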

    In all, ChatGPT managed to generate just five secure programs out of 21 on its first attempt. After further prompting to correct its missteps, the large language model managed to produce seven more secure apps – though that's "secure" only as it pertains to the specific vulnerability being evaluated. It's not an assertion that the final code is free of any other exploitable condition.

    The researchers' findings echo similar though not identical evaluations of GitHub's Copilot, another LLM based on the GPT-3 family of models (and recently upgraded to GPT-4) that has been tuned specifically for code generation. Other studies have looked at ChatGPT errors more generally. At the same time, these models are also being used to help identify security issues.

    The academics observe in their paper that part of the problem appears to arise from ChatGPT not assuming an adversarial model of code execution. The model, they say, "repeatedly informed us that security problems can be circumvented simply by 'not feeding an invalid input' to the vulnerable program it has created."

    Yet, they say, "ChatGPT seems aware of – and indeed readily admits – the presence of critical vulnerabilities in the code it suggests." It just doesn't say anything unless asked to evaluate the security of its own code suggestions.

    "Obviously, it's an algorithm. It doesn't know anything, but it can recognize insecure behavior," Raphaël Khoury, a professor of computer science and engineering at the Université du Québec en Outaouais and one of the paper's co-authors, told The Register

    Initially, ChatGPT's response to security concerns was to recommend only using valid inputs – something of a non-starter in the real world. It was only afterward, when prompted to remediate problems, that the AI model provided useful guidance.

    That's not ideal, the authors suggest, because knowing which questions to ask presupposes familiarity with specific vulnerabilities and coding techniques.

    In other words, if you know the right prompt to get ChatGPT to fix a vulnerability, you probably already understand how to address it.

    The authors also point out that there's ethical inconsistency in the fact that ChatGPT will refuse to create attack code but will create vulnerable code.

    They cite a Java deserialization vulnerability example in which "the chatbot generated vulnerable code, and provided advice on how to make it more secure, but stated it was unable to create the more secure version of the code."

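    For context, the vulnerable Java pattern the paper is describing is usually a bare ObjectInputStream.readObject() call on untrusted bytes, which instantiates whatever serializable class the stream names. One standard hardening, sketched below with illustrative class names of our own rather than the paper's actual code, is to allow-list the types you expect before deserialization proceeds:

    import java.io.*;
    import java.util.Date;

    public class SafeDeserialize {
        // A bare "new ObjectInputStream(in).readObject()" will instantiate
        // whatever serializable class the byte stream names, which is the
        // classic deserialization hole. This subclass allow-lists the one
        // type we actually expect and rejects everything else.
        static class AllowListedInput extends ObjectInputStream {
            AllowListedInput(InputStream in) throws IOException {
                super(in);
            }

            @Override
            protected Class<?> resolveClass(ObjectStreamClass desc)
                    throws IOException, ClassNotFoundException {
                if (!"java.util.Date".equals(desc.getName())) {
                    throw new InvalidClassException(
                            "unexpected class in stream: " + desc.getName());
                }
                return super.resolveClass(desc);
            }
        }

        public static void main(String[] args) throws Exception {
            // Round-trip a harmless object to show the allow-list passing.
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
                out.writeObject(new Date());
            }
            try (ObjectInputStream in = new AllowListedInput(
                    new ByteArrayInputStream(buf.toByteArray()))) {
                System.out.println(in.readObject());
            }
        }
    }

    On Java 9 and later, the built-in ObjectInputFilter mechanism (JEP 290) offers the same kind of allow-listing without subclassing ObjectInputStream.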

    Khoury contends that ChatGPT in its current form is a risk, which isn't to say there are no valid uses for an erratic, underperforming AI helper. "We have actually already seen students use this, and programmers will use this in the wild," he said. "So having a tool that generates insecure code is really dangerous. We need to make students aware that if code is generated with this type of tool, it very well might be insecure."

    "One thing that surprised me was when we asked [ChatGPT] to generate the same task – the same type of program in different languages – sometimes, for one language, it would be secure and for a different one, it would be vulnerable. Because this type of language model is a bit of a black box, I really don't have a good explanation or a theory about this."

    Source
