Plus: Why safety data for self-driving technology is misleading, and more
IN BRIEF Facebook and Instagram's parent biz, Meta, was hit with not one, not two, but eight different lawsuits accusing its social media algorithm of causing real harm to young users across the US.
The complaints filed over the last week claim Meta's social media platforms have been designed to be dangerously addictive, driving children and teenagers to view content that increases the risk of eating disorders, suicide, depression, and sleep disorders.
"Social media use among young people should be viewed as a major contributor to the mental health crisis we face in the country," said Andy Birchfield, an attorney representing the Beasley Allen Law Firm, leading the cases, in a statement.
"These applications could have been designed to minimize any potential harm, but instead, a decision was made to aggressively addict adolescents in the name of corporate profits. It's time for this company to acknowledge the growing concerns around the impact of social media on the mental health and well-being of this most vulnerable portion of our society and alter the algorithms and business objectives that have caused so much damage."
The lawsuits have been filed in federal courts in Texas, Tennessee, Colorado, Delaware, Florida, Georgia, Illinois and Missouri, according to Bloomberg.
How safe are autonomous vehicles really?
The safety of self-driving car software like Tesla's Autopilot is difficult to assess: little data is made public, and the metrics used for such assessments are misleading.
Companies developing autonomous vehicles typically report the number of miles driven by their self-driving technology before a human driver has to take over to prevent errors or crashes. The data, for example, shows fewer accidents occur when Tesla's Autopilot mode is activated. But that doesn't necessarily mean the technology is safer, experts argue.
Autopilot is more likely to be turned on for highway driving, where conditions are less complex for software to handle than navigating a busy city. Tesla and other automakers don't share data broken down by road type, which would allow better comparisons.
"We know cars using Autopilot are crashing less often than when Autopilot is not used," Noah Goodall, a researcher at the Virginia Transportation Research Council, told the New York Times. "But are they being driven in the same way, on the same roads, at the same time of day, by the same drivers?."
Last year, the National Highway Traffic Safety Administration ordered companies to report serious crashes involving self-driving cars within 24 hours of the accident occurring, but none of that information has been made public yet.
AI upstart accused of sneakily using human labor behind autonomous technology
Nate, a startup valued at over $300 million that claims to use AI to automatically fill in shoppers' payment information on retail websites, actually pays workers to enter the data manually for $1.
Buying stuff on the internet can be tedious. You have to type in your name, address, and credit card details if a website hasn't saved the information. Nate was built to help netizens avoid having to do this every time they visit an online store. Described as an AI app, Nate claimed it used automated methods to fill in personal data after a consumer placed an order.
But the software was tricky to develop, considering the various combinations of buttons the algorithms needed to press and the precautions websites have in place to stop bots and scalpers. To try to attract more consumers to the app, Nate offered folks $50 to spend online at shops like Best Buy and Walmart. But the upstart struggled to get its technology working well enough to fulfil those orders properly.
The best way to make it? Fake it. Instead, Nate turned to hiring workers in the Philippines to manually enter consumers' private information; orders were sometimes completed hours after they were placed, according to The Information. Some 60 to 100 percent of orders were processed manually, it was alleged. A spokesperson for the upstart said the report was "incorrect and the claims questioning our proprietary technology are completely baseless."
DARPA wants AI to be more trustworthy
DARPA, the US military's research arm, has launched a new program to fund the development of hybrid neuro-symbolic AI algorithms, in the hope that the technology will lead to more trustworthy systems.
Modern deep learning is often referred to as a "black box": its inner workings are opaque, and experts often don't understand how a neural network arrives at an output for a given input. That lack of transparency makes the results difficult to interpret, and risky to deploy in some scenarios. Some believe incorporating more traditional symbolic reasoning techniques could make models more trustworthy.
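As a toy illustration of the general neuro-symbolic idea (not DARPA's program or any real system), the sketch below pairs a stubbed "neural" classifier with hand-written symbolic rules that can veto its answer. All names, rules, and scores are hypothetical.

```python
# Hypothetical sketch of one neuro-symbolic pattern: a (stubbed) neural classifier
# proposes a label, and symbolic rules either accept it or force a fallback.

from dataclasses import dataclass

@dataclass
class Entity:
    broadcasts_valid_iff: bool   # transponder identifies it as friendly
    inside_no_fly_zone: bool

def neural_scores(entity: Entity) -> dict:
    """Stand-in for a trained network's class probabilities (fixed here for demo)."""
    return {"friendly": 0.2, "adversarial": 0.7, "neutral": 0.1}

def symbolic_check(entity: Entity, label: str) -> bool:
    """Hand-written rules a proposed label must satisfy to be trusted."""
    if entity.broadcasts_valid_iff and label == "adversarial":
        return False  # rule: a valid IFF broadcast contradicts 'adversarial'
    if entity.inside_no_fly_zone and label == "friendly":
        return False  # rule: friendly assets should not be in the no-fly zone
    return True

def classify(entity: Entity) -> str:
    scores = neural_scores(entity)
    # Walk labels from most to least likely, keeping the first one the rules accept.
    for label in sorted(scores, key=scores.get, reverse=True):
        if symbolic_check(entity, label):
            return label
    return "unknown"  # nothing survived the rules; defer to a human

print(classify(Entity(broadcasts_valid_iff=True, inside_no_fly_zone=False)))
# -> 'friendly': the rules reject the network's top pick ('adversarial') and
#    fall back to the next most likely label that is consistent with them.
```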
"Motivating new thinking and approaches in this space will help assure that autonomous systems will operate safely and perform as intended," said Sandeep Neema, program manager of DARPA's new Assured Neuro Symbolic Learning and Reasoning program. "This will be integral to trust, which is key to the Department of Defense's successful adoption of autonomy."
The initiative will fund research into hybrid architectures that mix symbolic systems with modern AI. DARPA is particularly interested in applications relevant to the military, such as models that can identify whether entities are friendly, adversarial, or neutral, or flag dangerous versus safe areas in combat. ®