  • Your Boss’s Spyware Could Train AI to Replace You

    aum

    • 1 comment
    • 438 views
    • 8 minutes

    Corporations are using software to monitor employees on a large scale. Some experts fear the data these tools collect could be used to automate people out of their jobs.

    YOU’VE PROBABLY HEARD the story: A young buck comes into a new job full of confidence, and the weathered older worker has to show them the ropes—only to find out they’ll be unemployed once the new employee is up to speed. This has been happening among humans for a long time—but it may soon start happening between humans and artificial intelligence.


    Countless headlines over the years have warned that automation isn’t just coming for blue-collar jobs, and that AI threatens scores of white-collar jobs as well. AI tools are becoming capable of automating tasks, and sometimes entire jobs, in the corporate world, especially when those jobs are repetitive and rely on processing data. This could affect everyone from workers at banks and insurance companies to paralegals and beyond.

    Carl Frey, an economist at Oxford University, coauthored a landmark study in 2013 estimating that AI could threaten nearly 50 percent of US jobs in the coming decades. Frey says he doesn’t think new AI tools like ChatGPT will automate jobs in this way, because they still require human involvement and are often unreliable. Still, many of the underlying factors outlined in that paper remain pertinent today.

    Considering the rapid pace at which AI is advancing, it’s hard to predict how it will soon be used and what it will be capable of.

    Then there’s the issue of how AI is being incorporated into daily work and how it’s being trained. Enter corporate spyware: invasive monitoring apps that let bosses keep close tabs on everything their employees do, collecting reams of data that could come into play here in interesting ways. Corporations that already monitor their employees on a large scale are now having workers use AI tools more frequently, and many questions remain about how the many AI tools currently in development are being trained.

    Put all of this together and there’s the potential that companies could use the data they’ve harvested from workers, both by monitoring them and by having them interact with AI that can learn from them, to develop new AI programs that could actually replace those workers. If your boss can figure out exactly how you do your job, and an AI program is learning from the data you’re producing, then eventually your boss might be able to just have the program do the job instead.

    “When it comes to monitoring workflows, I do think that’s going to be a way we automate a lot of this stuff,” Frey says. “What you might be able to do is take some of those foundational models and train them on some of the data you have internally and fine-tune them, or you could train a model from scratch just with your internal data.”
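
    Frey’s suggestion maps onto a standard fine-tuning workflow. Below is a minimal sketch in Python, assuming a company’s internal workflow records have already been flattened into plain-text lines; the file name, base model, and settings are hypothetical illustrations, not anything Frey describes specifically.

        # Hypothetical sketch: fine-tune a small open model on internal
        # workflow text. "workflow_logs.txt" (one plain-text record per
        # line) and the distilgpt2 base model are illustrative only.
        from datasets import load_dataset
        from transformers import (
            AutoModelForCausalLM,
            AutoTokenizer,
            DataCollatorForLanguageModeling,
            Trainer,
            TrainingArguments,
        )

        base = "distilgpt2"  # stand-in for any foundational model
        tokenizer = AutoTokenizer.from_pretrained(base)
        tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
        model = AutoModelForCausalLM.from_pretrained(base)

        # Load the internal records as a text dataset and tokenize them.
        dataset = load_dataset("text", data_files={"train": "workflow_logs.txt"})

        def tokenize(batch):
            return tokenizer(batch["text"], truncation=True, max_length=512)

        train_set = dataset["train"].map(tokenize, batched=True,
                                         remove_columns=["text"])

        trainer = Trainer(
            model=model,
            args=TrainingArguments(output_dir="workflow-model",
                                   num_train_epochs=1),
            train_dataset=train_set,
            # Causal-LM objective: learn to predict the next token of
            # each internal record (mlm=False).
            data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
        )
        trainer.train()

    Fine-tuning an existing model this way is usually far cheaper than the other option Frey mentions, training a model from scratch, which is why it’s the more plausible route for most employers.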


    David Autor, a professor of economics at MIT, says he also thinks AI could be trained in this way. There is already a lot of employee surveillance happening in the corporate world, and some of the data it collects could be used to help train AI programs. But even just the records of how people interact with AI tools throughout the workday could help train those programs to replace workers.

    “They will learn from the workflow in which they’re engaged,” Autor says. “Often people will be in the process of working with a tool, and the tool will be learning from that interaction.”
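
    Autor’s point is that ordinary use of a tool can generate training data as a side effect. Here is a minimal sketch of how an internal assistant might capture that data; the function names, log format, and backend are hypothetical, not any particular vendor’s API.

        # Hypothetical sketch: an internal assistant wrapper that saves
        # every prompt/response pair as a future training example. The
        # complete_text() backend stands in for whatever model the
        # employer actually uses; the log path is illustrative.
        import json
        import time
        from pathlib import Path

        LOG_PATH = Path("interaction_log.jsonl")

        def complete_text(prompt: str) -> str:
            """Placeholder for a call to the employer's AI model."""
            return "..."

        def assist(worker_id: str, prompt: str) -> str:
            response = complete_text(prompt)
            # Each interaction becomes one training example: what the
            # worker asked for, and what output they worked with.
            record = {
                "timestamp": time.time(),
                "worker_id": worker_id,
                "prompt": prompt,
                "response": response,
            }
            with LOG_PATH.open("a", encoding="utf-8") as f:
                f.write(json.dumps(record) + "\n")
            return response

        # Example: every ordinary request quietly grows the dataset.
        assist("employee-42", "Draft a renewal notice for this policy...")

    Accumulate enough of these records and the log itself becomes the kind of internal dataset Frey describes fine-tuning on above.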


    Whether you’re training an AI tool directly by interacting with it throughout the day, or the data you produce while you work is simply being used to build an AI program that can do that work, there are multiple ways a worker could inadvertently end up training an AI program to replace them. Even if the program doesn’t end up being especially effective, a lot of companies might be happy with an AI program that’s merely good enough, because it doesn’t require a salary and benefits.

    “I think there are a lot of discretionary white-collar jobs where you’re kind of using a mixture of hard information and soft information and trying to make advanced decisions,” Autor says. “People aren’t that good at that, machines aren’t that good at that, but probably machines can be pretty much as good as people.”


    Autor says he doesn’t see a “labor market apocalypse” coming. Many workers won’t be entirely replaced but will simply have their jobs changed by AI, he says, though some will certainly be made redundant by its advance. The problem, he says, is what happens to those workers once they can no longer find a well-paying job with the education and skill sets they have.

    “It’s not that we’re going to run out of work. It’s much more that people are doing something they’re good at, and that thing goes away. And then they end up doing a kind of generic activity that everybody’s good at, which means it pays very little—food service, cleaning, security, vehicle driving,” Autor says. “These are low-paying activities.”


    Once someone’s automated out of a well-paying job, they can end up slipping through the cracks. Autor says we’ve seen this happen in the past.


    “The hollowing out of manufacturing and office work over the past 40 years has definitely put downward pressure on the wages of people who would do that type of work, and it’s not because they’re doing it now at a lower rate of pay. It’s because they’re not doing it,” Autor says.


    Frey says politicians will need to offer solutions to those who fall through the cracks to prevent the destabilization of the economy and society, which would likely include social safety net programs for those affected. Frey has written extensively on the effects of the first Industrial Revolution, and he says there are lessons to be learned there. In Britain, for example, the Poor Laws gave financial relief to people who were harmed by automation.

    “What you see back then is a lot of social unrest. Wages are stagnant or falling for a large part of the population. You have riots,” Frey says. “If you look at the places where the Poor Laws were more generous, there was less social unrest and less upheaval. Using welfare systems to compensate people who lose out is something we’ve done for a long time and should continue to do.”


    Many people would also benefit from being retrained for other work, but Autor says the US has never been very good at retraining people, so effective retraining programs would have to be built. He says technology might actually help there, because people could be retrained using new digital tools.

    There was a lot of hype surrounding ChatGPT and similar AI tools when they came out. That hype has since died down a bit, suggesting to some that these tools may not be as useful as promised and perhaps won’t be taking everybody’s jobs. But at the rate at which AI is advancing, there’s no telling where things could be in five to 10 years, or even next year.

    Vincent Conitzer, a professor of computer science at Carnegie Mellon University, says people shouldn’t underestimate what these AI tools may soon be capable of. They may be somewhat limited in their use now, but that could change relatively quickly, and the tools could end up being as disruptive as some have warned.

    “I worry about this being a ‘boiling frog’ kind of scenario, where we see amazing advances in AI but then don’t immediately see them take over people’s jobs, and [people] conclude there wasn’t all that much to worry about, and we accept the new technology as the new normal but not all that impressive after all,” Conitzer says. “Meanwhile, gradually but quickly, the world and the job market do adjust to the new technologies in complex ways, and at some point we realize large societal problems have emerged.”


    Source



    Recommended Comments

    The biggest problem IMO is the blind acceptance by individuals and organisations that AI like ChatGPT is perfect, and so they start adding it to all their products. So much of that is driven not by trying to improve the products but by FoMO (Fear Of Missing Out): everyone else is doing it, and they don't want to be left behind in case AI actually turns out to be useful.

    Astute comments (about ChatGPT) by one of the great mods on Australia's Whirlpool forums...


    It can construct sensible paragraphs because it has lots of training material with sensible-sounding sentences, but there's a difference between sounding articulate and actually being knowledgeable (let alone correct).


    AI does not have God-like access to all the written information in the world, and even if it did it doesn't have God-like powers of knowing what is true and what isn't.
