  AI Is Your Coworker Now. Can You Trust It?


    Karlston

    Generative AI tools such as OpenAI’s ChatGPT and Microsoft’s Copilot are becoming part of everyday business life. But they come with privacy and security considerations you should know about.

    Generative AI tools such as OpenAI’s ChatGPT and Microsoft’s Copilot are rapidly evolving, fueling concerns that the technology could open the door to multiple privacy and security issues, particularly in the workplace.


    In May, privacy campaigners dubbed Microsoft’s new Recall tool a potential “privacy nightmare” due to its ability to take screenshots of your laptop every few seconds. The feature has caught the attention of UK regulator the Information Commissioner’s Office, which is asking Microsoft to reveal more about the safety of the product launching soon in its Copilot+ PCs.


    Concerns are also mounting over OpenAI’s ChatGPT, which has demonstrated screenshotting abilities in its soon-to-launch macOS app that privacy experts say could result in the capture of sensitive data.


    The US House of Representatives has banned the use of Microsoft’s Copilot among staff members after it was deemed by the Office of Cybersecurity to be a risk to users due to “the threat of leaking House data to non-House approved cloud services.”


    Meanwhile, market analyst Gartner has cautioned that “using Copilot for Microsoft 365 exposes the risks of sensitive data and content exposure internally and externally.” And last month, Google was forced to make adjustments to its new search feature, AI Overviews, after screenshots of bizarre and misleading answers to queries went viral.

    Overexposed

    For those using generative AI at work, one of the biggest challenges is the risk of inadvertently exposing sensitive data. Most generative AI systems are “essentially big sponges,” says Camden Woollven, group head of AI at risk management firm GRC International Group. “They soak up huge amounts of information from the internet to train their language models.”


    AI companies are “hungry for data to train their models,” and are “seemingly making it behaviorally attractive” to do so, says Steve Elcock, CEO and founder at software firm Elementsuite. This vast amount of data collection means there’s the potential for sensitive information to be put “into somebody else’s ecosystem,” says Jeff Watkins, chief product and technology officer at digital consultancy xDesign. “It could also later be extracted through clever prompting.”


    At the same time, there’s the threat of AI systems themselves being targeted by hackers. “Theoretically, if an attacker managed to gain access to the large language model (LLM) that powers a company's AI tools, they could siphon off sensitive data, plant false or misleading outputs, or use the AI to spread malware,” says Woollven.


    Consumer-grade AI tools can create obvious risks. However, an increasing number of potential issues are arising with “proprietary” AI offerings broadly deemed safe for work, such as Microsoft Copilot, says Phil Robinson, principal consultant at security consultancy Prism Infosec.


    “This could theoretically be used to look at sensitive data if access privileges have not been locked down. We could see employees asking to see pay scales, M&A activity, or documents containing credentials, which could then be leaked or sold.”


    Another concern centers on AI tools that could be used to monitor staff, potentially infringing their privacy. Microsoft says of its Recall feature that “your snapshots are yours; they stay locally on your PC” and “you are always in control with privacy you can trust.”


    Yet “it doesn’t seem very long before this technology could be used for monitoring employees,” says Elcock.

    Self-Censorship

    Generative AI does pose several potential risks, but there are steps businesses and individual employees can take to improve privacy and security. First, do not put confidential information into a prompt for a publicly available tool such as ChatGPT or Google’s Gemini, says Lisa Avvocato, vice president of marketing and community at data firm Sama.


    When crafting a prompt, be generic to avoid sharing too much. “Ask, ‘Write a proposal template for budget expenditure,’ not ‘Here is my budget, write a proposal for expenditure on a sensitive project,’” she says. “Use AI as your first draft, then layer in the sensitive information you need to include.”
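    To make that concrete, here is a minimal sketch in Python of the “generic prompt first, sensitive details later” approach. It assumes the official OpenAI Python SDK with an API key set in the environment; the model name and prompt wording are illustrative rather than a recommendation, and the confidential figures never leave your machine.

```python
# Minimal sketch: request a generic template, keep sensitive details local.
# Assumes the official OpenAI Python SDK and an OPENAI_API_KEY environment
# variable; the model name and prompt text are illustrative only.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Generic request: no project names, no real figures, no credentials.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": (
                "Write a proposal template for budget expenditure, with "
                "placeholders for the project name, line items, and totals."
            ),
        }
    ],
)

template = response.choices[0].message.content

# The sensitive step happens locally: fill in the placeholders on your own
# machine rather than sending real budgets or project details in the prompt.
print(template)
```

    The same pattern applies to any public chatbot: the request describes the shape of the document, and the sensitive content is merged in locally afterward.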


    If you use it for research, avoid issues such as those seen with Google’s AI Overviews by validating what it provides, says Avvocato. “Ask it to provide references and links to its sources. If you ask AI to write code, you still need to review it, rather than assuming it’s good to go.”


    Microsoft has itself stated that Copilot needs to be configured correctly and that the principle of “least privilege,” the concept that users should have access only to the information they need, should be applied. This is “a crucial point,” says Prism Infosec’s Robinson. “Organizations must lay the groundwork for these systems and not just trust the technology and assume everything will be OK.”
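    As a purely hypothetical illustration of least privilege (the names and data structures below are invented, not a description of how Copilot is actually configured), an internal assistant might filter documents against a user’s group memberships before anything is added to the model’s context:

```python
# Hypothetical sketch of "least privilege" applied to an AI assistant's
# retrieval step: only documents the requesting user is already entitled to
# see are eligible to reach the model. All names here are invented for
# illustration and do not reflect any real product's configuration.
from dataclasses import dataclass, field


@dataclass
class Document:
    title: str
    allowed_groups: set[str]


@dataclass
class User:
    name: str
    groups: set[str] = field(default_factory=set)


def retrievable_documents(user: User, corpus: list[Document]) -> list[Document]:
    """Return only the documents whose access groups overlap the user's groups."""
    return [doc for doc in corpus if doc.allowed_groups & user.groups]


corpus = [
    Document("Team holiday calendar", {"all-staff"}),
    Document("Pay scales 2024", {"hr"}),
    Document("M&A target shortlist", {"exec"}),
]

analyst = User("analyst", {"all-staff"})

# Only the holiday calendar is eligible for this user's context window;
# pay scales and M&A documents never reach the prompt.
for doc in retrievable_documents(analyst, corpus):
    print(doc.title)
```

    Under a check like this, the pay scales and M&A documents Robinson mentions would never be retrievable by a user outside the relevant groups, regardless of what they ask the assistant.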


    It’s also worth noting that ChatGPT uses the data you share to train its models, unless you turn it off in the settings or use the enterprise version.

    List of Assurances

    The firms integrating generative AI into their products say they’re doing everything they can to protect security and privacy. Microsoft is keen to outline the security and privacy considerations in its Recall product, and to point out that the feature can be controlled in Settings > Privacy & security > Recall & snapshots.


    Google says generative AI in Workspace “does not change our foundational privacy protections for giving users choice and control over their data,” and stipulates that information is not used for advertising.


    OpenAI reiterates how it maintains security and privacy in its products, while enterprise versions are available with extra controls. “We want our AI models to learn about the world, not private individuals—and we take steps to protect people’s data and privacy,” an OpenAI spokesperson tells WIRED.


    OpenAI says it offers ways to control how data is used, including self-service tools to access, export, and delete personal information, as well as the ability to opt out of having content used to improve its models. According to the company, it does not train on data or conversations from ChatGPT Team, ChatGPT Enterprise, or its API, and its models don’t learn from usage by default.


    Either way, it looks like your AI coworker is here to stay. As these systems become more sophisticated and omnipresent in the workplace, the risks are only going to intensify, says Woollven. “We're already seeing the emergence of multimodal AI such as GPT-4o that can analyze and generate images, audio, and video. So now it's not just text-based data that companies need to worry about safeguarding.”


    With this in mind, people—and businesses—need to get in the mindset of treating AI like any other third-party service, says Woollven. “Don't share anything you wouldn't want publicly broadcasted.”


    Source



