Showing results for tags 'ai'.

  1. Samsung is using artificial intelligence to automate the insanely complex and subtle process of designing cutting-edge computer chips. The South Korean giant is one of the first chipmakers to use AI to create its chips. Samsung is using AI features in new software from Synopsys, a leading firm whose chip design software is used by many companies. “What you're seeing here is the first of a real commercial processor design with AI,” says Aart de Geus, the chairman and co-CEO of Synopsys.

Others, including Google and Nvidia, have talked about designing chips with AI. But Synopsys’ tool, called DSO.ai, may prove the most far-reaching because Synopsys works with dozens of companies. The tool has the potential to accelerate semiconductor development and unlock novel chip designs, according to industry watchers. Synopsys has another valuable asset for crafting AI-designed chips: years of cutting-edge semiconductor designs that can be used to train an AI algorithm.

A spokesperson for Samsung confirms that the company is using Synopsys AI software to design its Exynos chips, which are used in smartphones, including its own branded handsets, as well as other gadgets. Samsung unveiled its newest smartphone, a foldable device called the Galaxy Z Fold3, earlier this week. The company did not confirm whether the AI-designed chips have gone into production yet, or what products they may appear in.

Across the industry, AI appears to be changing the way chips are made. A Google research paper published in June described using AI to arrange the components on the Tensor chips that it uses to train and run AI programs in its data centers. Google’s next smartphone, the Pixel 6, will feature a custom chip manufactured by Samsung. A Google spokesperson declined to say whether AI helped design the smartphone chip. Chipmakers including Nvidia and IBM are also dabbling in AI-driven chip design. 
Other makers of chip-design software, including Cadence, a competitor to Synopsys, are also developing AI tools to aid with mapping out the blueprints for a new chip. Mike Demler, a senior analyst at the Linley Group who tracks chip design software, says artificial intelligence is well suited to arranging billions of transistors across a chip. “It lends itself to these problems that have gotten massively complex,” he says. “It will just become a standard part of the computational tool kit.” Using AI tends to be expensive, Demler says, because it requires a lot of cloud computing power to train a powerful algorithm. But he expects it to become more accessible as the cost of computing drops and models become more efficient. He adds that many tasks involved in chip design cannot be automated, so expert designers are still needed. Modern microprocessors are incredibly complex, featuring multiple components that need to be combined effectively. Sketching out a new chip design normally requires weeks of painstaking effort as well as decades of experience. The best chip designers employ an instinctive understanding of how different decisions will affect each step of the design process. That understanding cannot easily be written into computer code, but some of the same skill can be captured using machine learning. The AI approach used by Synopsys, as well as by Google, Nvidia, and IBM, uses a machine-learning technique called reinforcement learning to work out the design of a chip. Reinforcement learning involves training an algorithm to perform a task through reward or punishment, and it has proven an effective way of capturing subtle and hard-to-codify human judgment. The method can automatically draw up the basics of a design, including the placement of components and how to wire them together, by trying different designs in simulation and learning which ones produce the best results. 
This can speed the process of designing a chip and allow an engineer to experiment with novel designs more efficiently. In a June blog post, Synopsys said one North American manufacturer of integrated circuits had improved the performance of a chip by 15 percent using the software. Most famously, reinforcement learning was used by DeepMind, a Google subsidiary, to develop AlphaGo, the program that in 2016 defeated a world-class player of the board game Go.

De Geus says his company realized that reinforcement learning could also be useful for chip design. “A bit over a year and a half ago, for the first time, we were able to get the same results as a team of experts would get in multiple months in just a few weeks,” de Geus says. He will present details of the technology and its development at Hot Chips, a semiconductor technology conference, on August 23. Stelios Diamantidis, senior director of artificial intelligence solutions at Synopsys, says the DSO.ai software can be configured to prioritize different goals, such as performance or energy efficiency.

Semiconductors, as well as the tools used to make them, have become increasingly prized assets. The US government has sought to restrict the supply of chipmaking technology to China, a key rival, and some politicians have called for such software to be added to the export controls list.

The emerging era of AI-designed chips also raises the prospect of simultaneously using AI to customize software to run more efficiently on a chip. This might include the neural network algorithms that run on specialized AI chips and are commonly used in modern AI. “AI-powered codesign of software and hardware is a rapidly growing direction,” says Song Han, a professor at MIT who specializes in AI chip design. “We have seen promising results.”

Samsung Has Its Own AI-Designed Chip. Soon, Others Will Too (May require free registration to view)
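The loop the article describes, trying different designs in simulation and learning which ones produce the best results, can be sketched in miniature. This is not DSO.ai or Google's method; it is reward-guided random search over a toy placement problem, with an invented four-block netlist, using total wirelength as the reward signal that a real reinforcement-learning system would optimize.

```python
import random

# Toy stand-in for AI-driven chip layout: propose a placement, score it in
# "simulation," and keep whichever design scores best. The netlist below
# (blocks and the wires between them) is invented for illustration.
NETS = [("cpu", "cache"), ("cpu", "io"), ("cache", "mem"), ("mem", "io")]
BLOCKS = ["cpu", "cache", "mem", "io"]
GRID = [(x, y) for x in range(4) for y in range(4)]  # 4x4 placement sites

def wirelength(placement):
    # Reward proxy: total Manhattan distance over all nets (lower is better).
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in NETS)

def search(trials=2000, seed=0):
    rng = random.Random(seed)
    best, best_len = None, float("inf")
    for _ in range(trials):
        sites = rng.sample(GRID, len(BLOCKS))   # propose a candidate layout
        placement = dict(zip(BLOCKS, sites))
        score = wirelength(placement)           # "simulate" the design
        if score < best_len:                    # remember which designs win
            best, best_len = placement, score
    return best, best_len

best, length = search()
# The minimum possible here is 4, since each net needs at least one grid step.
print(length)
```

Real tools replace the random proposals with a learned policy, so each new candidate is informed by every design already scored.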
  2. GitHub and OpenAI launch a new AI tool that generates its own code

Microsoft gets a taste of OpenAI’s tech

GitHub and OpenAI have launched a technical preview of a new AI tool called Copilot, which lives inside the Visual Studio Code editor and autocompletes code snippets. Copilot does more than just parrot back code it’s seen before, according to GitHub. It instead analyzes the code you’ve already written and generates new matching code, including specific functions that were previously called. Examples on the project’s website include automatically writing the code to import tweets, draw a scatterplot, or grab a Goodreads rating. It works best with Python, JavaScript, TypeScript, Ruby, and Go, according to a blog post from GitHub CEO Nat Friedman.

GitHub sees this as an evolution of pair programming, where two coders work on the same project to catch each other’s mistakes and speed up the development process. With Copilot, one of those coders is virtual.

This project is the first major result of Microsoft’s $1 billion investment in OpenAI, the research firm now led by former Y Combinator president Sam Altman. Since Altman took the reins, OpenAI has pivoted from nonprofit status to a “capped-profit” model, taken on the Microsoft investment, and started licensing its GPT-3 text-generation algorithm. Copilot is built on a new algorithm called OpenAI Codex, which OpenAI CTO Greg Brockman describes as a descendant of GPT-3. GPT-3 is OpenAI’s flagship language-generating algorithm, which can generate text sometimes indistinguishable from human writing. It’s able to write so convincingly because of its sheer size of 175 billion parameters, or adjustable knobs that allow the algorithm to capture relationships between letters, words, phrases, and sentences. While GPT-3 generates English, OpenAI Codex generates code. 
OpenAI plans to release a version of Codex through its API later this summer so developers can build their own apps with the tech, a representative for OpenAI told The Verge in an email. Codex was trained on terabytes of openly available code pulled from GitHub, as well as English language examples.

While testimonials on the site rave about the productivity gains Copilot provides, GitHub implies that not all the code utilized was vetted for bugs, insecure practices, or personal data. The company writes that it has put a few filters in place to prevent Copilot from generating offensive language, but they might not be perfect. “Due to the pre-release nature of the underlying technology, GitHub Copilot may sometimes produce undesired outputs, including biased, discriminatory, abusive, or offensive outputs,” Copilot’s website says. Given criticisms of GPT-3’s bias and abusive language patterns, it seems that OpenAI hasn’t found a way to prevent algorithms from inheriting their training data’s worst elements. The company also warns that the model could suggest email addresses, API keys, or phone numbers, but that this is rare and the data has been found to be synthetic or pseudo-randomly generated by the algorithm. However, the code generated by Copilot is largely original. A test performed by GitHub found that only 0.1 percent of generated code could be found verbatim in the training set.

This isn’t the first project to try to automatically generate code to help toiling programmers. The startup Kite pitches a very similar functionality, with availability in more than 16 code editors. Right now, Copilot is in a restricted technical preview, but you can sign up on the project’s website for a chance to access it.

Source: GitHub and OpenAI launch a new AI tool that generates its own code

The Truth about Github Copilot // AI Programming First Look
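The pair-programming flow described above is easiest to see with a concrete shape: the human writes a comment and a signature, and the tool fills in the body. The "completion" below is purely illustrative, written by hand, not actual Copilot output.

```python
# Human-written prompt: a comment plus an empty function signature.
# "compute the average star rating from a list of product reviews,
#  ignoring reviews that have no rating"

def average_rating(reviews):
    # --- the body below is what a Copilot-style completion might produce ---
    ratings = [r["stars"] for r in reviews if r.get("stars") is not None]
    if not ratings:
        return None
    return sum(ratings) / len(ratings)

print(average_rating([{"stars": 4}, {"stars": 5}, {"stars": None}]))  # 4.5
```

The point of the preview program is exactly this handoff: the developer still reviews and owns the generated body, which matters given GitHub's own caveats about unvetted training code.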
  3. Google says its AI is a much faster and better chip designer than humans

Even before artificial intelligence (AI) had started taking shape, we humans feared that it would outsmart and outdo us, taking over our jobs and, eventually, everything else. Well, we may be very close to the realization of those fears now. Today, in a study published in Nature, Google Brain, which is Google's deep learning and AI research team, claimed that it has developed a new reinforcement-learning system that can do microprocessor floorplanning much faster and better than humans can. With the help of an edge-based graph convolutional neural network architecture, the AI is able to design floorplans in only a fraction of the time that humans take.

The image below shows two memory macro-block designs. The one on the left was done by humans. The other was made by the AI in only a few hours, far less time than humans need, and it fits a higher number of macro blocks too. The floorplan is basically the layout of the various functional blocks inside a processor, arranged to produce the most efficient design. Below are two example images of what a floorplan looks like. The one on the left is simpler, while the other is a bit more complex, with more details.

Interestingly, Google will be utilizing this technology to build its own AI accelerators, called Tensor Processing Units (TPUs). The approach could also be used in chipmaking in general, as it has the potential to save a huge amount of time.

Source and image: Nature | CPU floorplan images via ResearchGate (1),(2)

Google says its AI is a much faster and better chip designer than humans
  4. Yet another Google AI leader has defected to Apple

He will work under John Giannandrea, who also left Google for Apple.

AI researcher Samy Bengio (left) poses with his brother Yoshua Bengio (right) for a photo tied to a report from cloud-platform company Paperspace on the future of AI.

Apple has hired Samy Bengio, a prominent AI researcher who previously worked at Google. Bengio will lead "a new AI research unit" within Apple, according to a recent report in Reuters. He is just the latest in a series of prominent AI leaders and workers Apple has hired away from the search giant. Bengio will work directly under John Giannandrea, Apple's senior vice president of machine learning and AI strategy. Giannandrea himself used to lead AI at Google (he worked there for eight years) before jumping ship to Apple in 2018. We interviewed him at length last year about the growing role of machine learning in Apple's software and services.

Apple uses machine learning to improve the quality of photos taken with the iPhone, surface suggestions of content and apps that users might want, power smart search features across its various software offerings, assist in palm rejection for users writing with the iPad's Pencil accessory, and much more.

Bengio was part of a cadre of AI professionals who left Google to protest the company's firings of its own AI ethics researchers (Margaret Mitchell and Timnit Gebru) after those researchers raised concerns about diversity and Google's approach to ethical considerations around new applications of AI and machine learning. Bengio voiced his support for Mitchell and Gebru, and he departed of his own volition after they were let go. In his 14 years at Google, Bengio worked on AI applications such as speech and image analysis, among other things. Neither Bengio nor Apple has said exactly what he will be researching in his new role in Cupertino.

Yet another Google AI leader has defected to Apple
  5. The Foundations of AI Are Riddled With Errors The labels attached to images used to train machine-vision systems are often wrong. That could mean bad decisions by self-driving cars and medical algorithms. Photograph: Julian Stratenschulte/Getty Images The current boom in artificial intelligence can be traced back to 2012 and a breakthrough during a competition built around ImageNet, a set of 14 million labeled images. In the competition, a method called deep learning, which involves feeding examples to a giant simulated neural network, proved dramatically better at identifying objects in images than other approaches. That kick-started interest in using AI to solve different problems. But research revealed this week shows that ImageNet and nine other key AI data sets contain many errors. Researchers at MIT compared how an AI algorithm trained on the data interprets an image with the label that was applied to it. If, for instance, an algorithm decides that an image is 70 percent likely to be a cat but the label says “spoon,” then it’s likely that the image is wrongly labeled and actually shows a cat. To check, where the algorithm and the label disagreed, researchers showed the image to more people. ImageNet and other big data sets are key to how AI systems, including those used in self-driving cars, medical imaging devices, and credit-scoring systems, are built and tested. But they can also be a weak link. The data is typically collected and labeled by low-paid workers, and research is piling up about the problems this method introduces. Algorithms can exhibit bias in recognizing faces, for example, if they are trained on data that is overwhelmingly white and male. 
Labelers can also introduce biases if, for example, they decide that women shown in medical settings are more likely to be “nurses” while men are more likely to be “doctors.” Recent research has also highlighted how basic errors lurking in the data used to train and test AI models—the predictions produced by an algorithm—may disguise how good or bad those models really are. “What this work is telling the world is that you need to clean the errors out,” says Curtis Northcutt, a PhD student at MIT who led the new work. “Otherwise the models that you think are the best for your real-world business problem could actually be wrong.” Aleksander Madry, a professor at MIT, led another effort to identify problems in image data sets last year and was not involved with the new work. He says it highlights an important problem, although he says the methodology needs to be studied carefully to determine if errors are as prevalent as the new work suggests. Similar big data sets are used to develop algorithms for various industrial uses of AI. Millions of annotated images of road scenes, for example, are fed to algorithms that help autonomous vehicles perceive obstacles on the road. Vast collections of labeled medical records also help algorithms predict a person’s likelihood of developing a particular disease. Such errors might lead machine learning engineers down the wrong path when choosing among different AI models. “They might actually choose the model that has worse performance in the real world,” Northcutt says. Northcutt points to the algorithms used to identify objects on the road in front of self-driving cars as an example of a critical system that might not perform as well as its developers think. It is hardly surprising that AI data sets contain errors, given that annotations and labels are typically applied by low-paid crowd workers. This is something of an open secret in AI research, but few researchers have tried to pinpoint the frequency of such errors. 
Nor has the effect on the performance of different AI models been shown. The MIT researchers examined the ImageNet test data set—the subset of images used to test a trained algorithm—and found incorrect labels on 6 percent of the images. They found a similar proportion of errors in data sets used to train AI programs to gauge how positive or negative movie reviews are, how many stars a product review will receive, or what a video shows, among others. These AI data sets have been used to train algorithms and measure progress in areas including computer vision and natural language understanding. The work shows that the presence of these errors in the test data set makes it difficult to gauge how good one algorithm is compared with another. For instance, an algorithm designed to spot pedestrians might perform worse when incorrect labels are removed. That might not seem like much, but it could have big consequences for the performance of an autonomous vehicle. After a period of intense hype following the 2012 ImageNet breakthrough, it has become increasingly clear that modern AI algorithms may suffer from problems as a result of the data they are fed. Some say the whole concept of data labeling is problematic too. “At the heart of supervised learning, especially in vision, lies this fuzzy idea of a label,” says Vinay Prabhu, a machine learning researcher who works for the company UnifyID. Last June, Prabhu and Abeba Birhane, a PhD student at University College Dublin, combed through ImageNet and found errors, abusive language, and personally identifying information. Prabhu points out that labels often cannot fully describe an image that contains multiple objects, for example. He also says it is problematic if labelers can add judgments about a person’s profession, nationality, or character, as was the case with ImageNet. The Foundations of AI Are Riddled With Errors (may require free registration)
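The check the MIT researchers describe, comparing a model's confident prediction against the given label, can be sketched directly. The probabilities below are made up; in the study they come from a trained classifier, and flagged images were then shown to human reviewers for confirmation.

```python
# Flag examples where a model confidently disagrees with the dataset label,
# the core idea behind the label-error audit described in the article.
def find_likely_label_errors(pred_probs, labels, threshold=0.7):
    suspects = []
    for i, (probs, given) in enumerate(zip(pred_probs, labels)):
        best = max(probs, key=probs.get)           # model's top guess
        if best != given and probs[best] >= threshold:
            suspects.append((i, given, best))      # (index, label, guess)
    return suspects

# Invented model outputs for two images, both labeled "spoon" in the dataset.
pred_probs = [
    {"cat": 0.9, "spoon": 0.1},   # labeled "spoon" but the model sees a cat
    {"cat": 0.2, "spoon": 0.8},   # label and model agree
]
labels = ["spoon", "spoon"]
print(find_likely_label_errors(pred_probs, labels))
# -> [(0, 'spoon', 'cat')]
```

The threshold is the judgment call: set it too low and honest model mistakes get flagged as label errors, which is why the flagged set still goes to human reviewers.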
  6. This article introduces two interesting websites. The first is a tool that uses AI to turn simple strokes into colorful images, like the one shown below. Were you surprised? Yes, it's just that powerful. A number of different sample graphics have been added to the site, as shown above. There is an edit option under each of the sample graphics (I missed this option when I took the screenshot) that we can use to edit the strokes and get our own images. We can even try coloring in some shapes! However, it takes a long time to download the output image after we've edited it. The website's builders have also published some code for the project, for anyone interested to take a look. Here is the URL: https://affinelayer.com/pixsrv/

The second website I want to introduce compresses images. We know that the file sizes allowed by the Forum are very small, so compressing images is important. This website can help compress pictures, but you need to register. You can try it. Here is the URL: https://optidash.ai/

I hope this helps. If you like this article, please click "Like". Reputation really matters to me.
  7. ENISA Highlights AI Security Risks for Autonomous Cars

Automakers Should Employ Security-By-Design to Thwart Cyber Risks

Localization of vehicle sensors and their main uses. Source: ENISA

Autonomous vehicle manufacturers are advised to adopt security-by-design models to mitigate cybersecurity risks, as artificial intelligence is susceptible to evasion and poisoning attacks, says a new ENISA report. The study from the European Union Agency for Cybersecurity and the European Commission's Joint Research Centre warns that autonomous cars are susceptible to both unintentional harm, caused by existing vulnerabilities in the hardware and software components of the cars, and intentional misuse, where attacks can introduce new vulnerabilities for further compromise. Artificial intelligence models are also described as susceptible to evasion and poisoning attacks, in which attackers manipulate what is fed into the AI systems to alter the outcomes. As a result, the report notes, autonomous cars are vulnerable to potential distributed denial-of-service attacks and various other threats associated with their many sensors, controls and connection mechanisms.

"The growing use of AI to automate decision-making in a diversity of sectors exposes digital systems to cyberattacks that can take advantage of the flaws and vulnerabilities of AI and ML methods," the report notes. "Since AI systems tend to be involved in high-stake decisions, successful cyberattacks against them can have serious impacts. AI can also act as an enabler for cybercriminals."

Perceived Threats

Likely threats identified by the report include:

Sensor jamming: Attackers can blind or jam sensors used in autonomous cars by altering the AI algorithms after gaining access to the car's systems by leveraging exploits. This way attackers can feed the algorithms wrong data to diminish the effectiveness of automated decision making. 
DDoS attacks: Hackers can disrupt the communication channels available to the vehicle to hinder operations needed for autonomous driving.

Exposed data: Due to the abundance of information stored and utilized by vehicles for the purpose of autonomous driving, attackers can leverage vulnerabilities to access and expose user data.

Recommendations

The report provides several recommendations for automakers to implement to avoid attacks against autonomous cars; these include:

Systematic security validation of AI models and data: Since autonomous cars collect large amounts of data, such as input from multiple sensors, ENISA recommends that automakers systematically monitor and conduct risk assessment processes for the AI models and their algorithms.

Address supply chain challenges related to AI cybersecurity: Automakers should ensure compliance with AI security regulations across their supply chain by involving and sharing responsibility among stakeholders as diverse as developers, manufacturers, providers, vendors, aftermarket support operators, end users, and third-party providers of online services. They should be aware of the difficulty of tracing open source assets, with pre-trained models available online and widely used in ML systems without guarantee of their origin. It is advised to use secure embedded components to perform the most critical AI functions.

Develop incident handling and response plans: A clear and established cybersecurity incident handling and response plan should be put in place, taking into account the increased number of digital components in the vehicle and, in particular, the ones based on AI. Automakers are advised to develop simulated attacks and establish mandatory standards for reporting AI security incidents. They should organize disaster drills involving senior management, so that leadership understands the potential impact in case a vulnerability is discovered. 
Build AI cybersecurity knowledge among developers and system designers: A shortage of specialist skills hampers the integration of security in the automotive sector, so it is recommended that AI cybersecurity be integrated into organization-wide policy. Also, diverse teams should be created, consisting of experts from ML-related fields, cybersecurity and the automotive sector, including mentors to assist the adoption of AI security practices. For the longer term, industry expertise should be brought into the academic curriculum by inviting leading people in the field to give guest lectures or by defining special courses that tackle this topic.

Past Attacks

The extent of the challenge for the automotive sector to implement AI security by design was laid bare by the recall during February of nearly 1.3 million Mercedes-Benz cars in the U.S. due to a problem in their emergency communication module software. "They had to recall cars manufactured since 2016, which means that there was no proper testing plan for this feature for almost 5 years," Wissam Al Adany, CIO of automotive company Ghabbourauto in Egypt, told Information Security Media Group.

While attacks targeting physical autonomous cars remain relatively few, security researchers have successfully compromised various models of Tesla vehicles. In November 2020, researchers from Belgium's University of Leuven - aka KU Leuven - found that they could clone a Tesla Model X driver's wireless key fob, and about two minutes later drive away with the car. A demonstration video posted by the researchers also suggests such an attack could be stealthy, potentially leaving a stolen car’s owner unaware of what was happening (see: Gone in 120 Seconds: Flaws Enable Theft of Tesla Model X). 
In October 2020, researchers from Israel’s Ben-Gurion University of the Negev demonstrated how some autopilot systems from Tesla can be tricked into reacting after seeing split-second images or projections (see: Tesla's Autopilot Tricked by Split-Second 'Phantom' Images). In yet another case, an independent security researcher uncovered a cross-site scripting vulnerability in the Tesla Model 3 that could enable attackers to access the model's vitals endpoint, which contained information about other cars (see: How a Big Rock Revealed a Tesla XSS Vulnerability).

Source: ENISA Highlights AI Security Risks for Autonomous Cars
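The evasion attacks the report warns about amount to nudging a model's input until its decision flips. A minimal illustration on a toy linear "obstacle detector": the weights and inputs are invented, and real attacks target deep perception models with far subtler perturbations, but the mechanism is the same.

```python
# Toy evasion attack: push a linear detector's score below its threshold.
WEIGHTS = [0.8, -0.5, 0.3]   # invented detector: score > 0 means "obstacle"

def score(features):
    return sum(w * x for w, x in zip(WEIGHTS, features))

def evade(features, step=0.1, max_iters=100):
    # For a linear score, the gradient w.r.t. the input is just WEIGHTS,
    # so push each feature opposite its weight's sign to drive the score down.
    x = list(features)
    for _ in range(max_iters):
        if score(x) <= 0:
            return x
        x = [xi - step * (1 if w > 0 else -1) for xi, w in zip(x, WEIGHTS)]
    return x

clean = [1.0, 0.2, 0.5]       # detector fires on the clean input (score 0.85)
adversarial = evade(clean)
print(score(clean) > 0, score(adversarial) > 0)  # True False
```

This is why the report pushes systematic validation of AI models: small, deliberate input changes, not software bugs, are enough to change the output.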
  8. Researchers are working on a chess-playing AI that emulates human-level skill Maia doesn't try to make the perfect moves In brief: AI has been kicking our collective butt in just about every classic board game imaginable for many years now. That's no surprise, though -- when you tell an AI to learn from the best with no checks or balances, that's precisely what it will do. Now, though, researchers are looking for a way to handicap Chess-playing AI and teach a new model to make more human-like decisions. This is certainly a novel concept: again, most chess and board game-playing AIs seek to beat out the best of the best. Indeed, in some cases, AI players have been so good that they've driven some pros out of the gaming community entirely. Maia, on the other hand, is a new chess engine that seeks to emulate, not surpass, human-level chess performance. As researchers point out, this could lead to a more "enjoyable chess-playing experience" for any humans an AI is matched up against, while also allowing those players to learn and improve their skills. "Current chess AIs don't have any conception of what mistakes people typically make at a particular ability level," University of Toronto researcher Ashton Anderson explains. "They will tell you all the mistakes you made – all the situations in which you failed to play with machine-like precision – but they can't separate out what you should work on." For a novice or medium-tier player, it can be difficult to determine your pain points if you're getting crushed by your opponent. However, when the challenge is fair and the playing field is level, it's easier to find those small spots where you could've done better. "Maia has algorithmically characterized which mistakes are typical of which levels, and therefore which mistakes people should work on and which mistakes they probably shouldn't, because they are still too difficult," Anderson continues. 
So far, Maia has been able to match human moves more than 50 percent of the time. That's not a great number yet, but it's a start. Maia was introduced to lichess.org, a free online chess service, a few weeks ago. In its first week of availability, the model played a whopping 40,000 games, but that number has risen to 116,370 games now. Breaking that figure down, the bot has won 66,000 games, drawn 9,000, and lost 40,000. Before its lichess debut, the model was trained on 9 sets of 500,000 "positions" in real human chess games. It's allegedly possible to play against the bot, though I cannot figure out how to do so, since its profile doesn't appear to have a "challenge" button of any kind. However, since "maia1" appears to be constantly playing at least 20 games at any given time, you can spectate whenever you like. Source: Researchers are working on a chess-playing AI that emulates human-level skill
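The "matched human moves more than 50 percent of the time" figure above is a move-matching rate, which is straightforward to compute. The games and the lookup-table "engine" below are invented stand-ins; Maia itself is a trained neural network evaluated against real human games.

```python
# Move-matching metric: how often does the engine's chosen move equal the
# move the human actually played in that position?
def move_match_rate(positions, engine):
    hits = sum(1 for pos, human_move in positions if engine(pos) == human_move)
    return hits / len(positions)

# Invented data: (position identifier, move the human played there).
games = [("pos1", "e4"), ("pos2", "Nf3"), ("pos3", "d4"), ("pos4", "O-O")]

# A trivial "engine" backed by a lookup table, standing in for the model.
book = {"pos1": "e4", "pos2": "Nf3", "pos3": "c4", "pos4": "O-O"}
print(move_match_rate(games, book.get))  # -> 0.75
```

A conventional engine optimized to win would score poorly on this metric at club level, which is exactly the gap Maia is built to close.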
  9. Walmart Employees Are Out to Show Its Anti-Theft AI Doesn't Work The retailer denies there is any widespread issue with the software, but a group expressed frustration—and public health concerns. Illustration: Sam Whitney; Getty Images In January, my coworker received a peculiar email. The message, which she forwarded to me, was from a handful of corporate Walmart employees calling themselves the “Concerned Home Office Associates.” (Walmart’s headquarters in Bentonville, Arkansas, is often referred to as the Home Office.) While it’s not unusual for journalists to receive anonymous tips, they don’t usually come with their own slickly produced videos. The employees said they were “past their breaking point” with Everseen, a small artificial intelligence firm based in Cork, Ireland, whose technology Walmart began using in 2017. Walmart uses Everseen in thousands of stores to prevent shoplifting at registers and self-checkout kiosks. But the workers claimed it misidentified innocuous behavior as theft, and often failed to stop actual instances of stealing. They told WIRED they were dismayed that their employer—one of the largest retailers in the world—was relying on AI they believed was flawed. One worker said that the technology was sometimes even referred to internally as “NeverSeen” because of its frequent mistakes. WIRED granted the employees anonymity because they are not authorized to speak to the press. The workers said they had been upset about Walmart’s use of Everseen for years, and claimed colleagues had raised concerns about the technology to managers, but were rebuked. They decided to speak to the press, they said, after a June 2019 Business Insider article reported Walmart’s partnership with Everseen publicly for the first time. The story described how Everseen uses AI to analyze footage from surveillance cameras installed in the ceiling, and can detect issues in real time, such as when a customer places an item in their bag without scanning it. 
When the system spots something, it automatically alerts store associates. “Everseen overcomes human limitations. By using state-of-the-art artificial intelligence, computer vision systems, and big data we can detect abnormal activity and other threats,” a promotional video referenced in the story explains. “Our digital eye has perfect vision and it never needs a day off.” In an effort to refute these claims, the Concerned Home Office Associates created a video, which purports to show Everseen’s technology failing to flag items not being scanned in three different Walmart stores. Set to cheery elevator music, it begins with a person using self-checkout to buy two jumbo packages of Reese’s White Peanut Butter Cups. Because they’re stacked on top of each other, only one is scanned, but both are successfully placed in the bagging area without issue. The same person then grabs two gallons of milk by their handles, and moves them across the scanner with one hand. Only one is rung up, but both are put in the bagging area. They then put their own cell phone on top of the machine, and an alert pops up saying they need to wait for assistance—a false positive. “Everseen finally alerts! But does so mistakenly. Oops again,” a caption reads. The filmmaker repeats the same process at two more stores, where they fail to scan a heart-shaped Valentine’s Day chocolate box with a puppy on the front and a Philips Sonicare electric toothbrush. At the end, a caption explains that Everseen failed to stop more than $100 of would-be theft. False Positives The video isn’t definitive proof that Everseen’s technology doesn’t work as well as advertised, but its existence speaks to the level of frustration felt by the group of anonymous Walmart employees, and the lengths to which they went to prove their objections had merit. 
In interviews, the workers, whose jobs include knowledge of Walmart’s loss prevention programs, said their top concern with Everseen was false positives at self-checkout. The employees believe that the tech frequently misinterprets innocent behavior as potential shoplifting, which frustrates customers and store associates, and leads to longer lines. “It’s like a noisy tech, a fake AI that just pretends to safeguard,” said one worker. The coronavirus pandemic has given their concerns more urgency. One Concerned Home Office Associate said they worry false positives could be causing Walmart workers to break social distancing guidelines unnecessarily. When Everseen flags an issue, a store associate needs to intervene and determine whether shoplifting or another problem is taking place. In an internal communication from April obtained by WIRED, a corporate Walmart manager expressed strong concern that workers were being put at risk by the additional contact necessitated by false positives, and asked whether the Everseen system should be turned off to protect customers and workers. Before Covid-19, “it wasn’t ideal, it was a poor customer experience,” the worker said. “AI is now creating a public health risk.” (HuffPost reported last week that corporate Walmart employees were concerned about Everseen’s technology putting store associates at risk amid the pandemic.) When Covid-19 reached the US, Americans rushed to stock up on food and household essentials at Walmart, and sales soared. Workers soon began falling sick; at least 20 Walmart associates have now died after contracting the coronavirus, according to United For Respect, a nonprofit that advocates for retail workers and that is crowdsourcing Covid-19 infection rates and working conditions at Walmart stores across the country. Last month, United For Respect said hundreds of Walmart employees participated in a national strike demanding safer working conditions and better benefits. 
A spokesperson for Walmart said the company has been working diligently to protect customers and its workforce, and believes the rate at which associates have contracted Covid-19 is lower than that of the general US population. They denied that false positives caused by Everseen were a widespread issue, and said the company had not considered turning the system off due to concerns about Covid-19. “We assess our technology regularly, and as evident with the large scale implementation of Everseen across the chain, we have confidence it is currently meeting our standards,” the spokesperson said in an email. Just prior to the start of the pandemic, Walmart said it made significant improvements to its Everseen system, which resulted in fewer alerts overall. The spokesperson declined to answer questions about what the updates may have entailed. The spokesperson also noted that there are a number of different reasons an associate might intervene during a self-checkout transaction, like when a customer has problems with their credit card. The company said it has taken a number of steps to ensure people are protected during these interactions, including regularly cleaning self-checkout kiosks and providing employees with protective equipment. In addition, workers are given handheld devices that allow them to handle most interventions from a distance, the company said. Everseen declined to answer questions about its technology. In a statement, a spokesperson said the company “accurately and effectively identifies potential theft [sic] is why retailers have successfully deployed it at thousands of locations to date, with many more installations planned.” They added that Everseen typically accounts only for less than 10 percent of total interventions at self-service checkouts. 
In a separate statement, the spokesperson said “Everseen is committed to helping its customers deliver the best possible experience for shoppers and store associates, especially during the COVID-19 pandemic. Self-checkout offers the benefits of a generally contactless shopping experience, allowing for proper social distancing and avoiding manned-lanes in busy stores with limited staff available.” But the Concerned Home Office Associates said their worries about Everseen long predate the pandemic. Emails obtained by WIRED show that other corporate employees raised issues about the technology failing to prevent theft in both 2017 and 2018. The employees said they were particularly vexed by Walmart’s continued investment in Everseen because NCR Corporation, which makes the majority of Walmart’s registers, had acquired an Everseen competitor called StopLift. They considered the acquisition an endorsement, and were confused as to why StopLift’s technology wasn’t being further explored. What’s more, the workers said an internal Walmart research and development group, the Intelligent Retail Lab (IRL), created its own anti-theft software they believed was more accurate than Everseen’s, according to information they were given internally. One Walmart employee said the technology, the existence of which was previously reported by The Wall Street Journal, is now being tested in roughly 50 stores. Walmart declined to answer questions about its internal anti-theft software, but did not dispute WIRED’s reporting. “At an enterprise level, there are a number of tests happening at any given time across our footprint of nearly 5,000 stores,” a spokesperson said in a statement. “The goal of IRL is to build AI capabilities that can be transferred to additional stores. We regularly test capabilities built internally in a small number of stores.” Rising Shrink Everseen’s technology was designed in part to help solve a persistent problem with self-checkout. 
While allowing customers to scan and pay for their own items cuts down on labor costs for retailers, it has also led to more inventory loss, or “shrinkage,” due to shoplifting, employee theft, and other problems. “Theft through self-checkout lanes is exponentially higher than through traditional checkout lanes,” says Christopher Andrews, a sociology professor at Drew University and the author of The Overworked Consumer: Self-Checkouts, Supermarkets, and the Do-It-Yourself Economy. In the past, Walmart and other retailers relied on weight sensors to prevent shoplifting through self-checkout, but those were prone to error and frustrated customers. Some stores are now turning instead to firms like Everseen, which promise to reduce shrink and increase customer satisfaction by relying on surveillance cameras and machine vision. Everseen has said that it works with a number of major retailers. Amazon uses similar technology in its Amazon Go convenience stores, where a network of cameras automatically logs the products customers take. (Amazon is now licensing its “Just Walk Out” tech to other companies.) During the coronavirus pandemic and its aftermath, self-checkout may become even more important for stores, as customers look for low-risk ways to shop. NCR Corporation said it’s now helping retailers modify its equipment to be as touchless as possible: for example, by reconfiguring machines so that customers can insert a debit or credit card without needing to press the “credit card” payment option. “It is fascinating to see self-checkout become poised as a public health strategy, in addition to things like cashless payment,” says Alexandra Mateescu, a researcher at the nonprofit institute Data & Society, who has written about the effects of new technology on retail workers. 
“Self-checkout is just one of the ways that we’ve offered customers solutions to get the items they need safely during this time, in addition to other options like delivery, pickup, touchless payment at the register and shopping online,” the Walmart spokesperson said in a statement. “Customers are using this option now, as much as ever, and we will continue to work hard to ensure the in-store experience for our customers is safe, affordable and convenient, as well as safe for our associates.” Walmart Employees Are Out to Show Its Anti-Theft AI Doesn't Work
  10. Microsoft lays off journalists to replace them with AI Microsoft News and MSN are increasingly being run by robots Microsoft is laying off dozens of journalists and editorial workers at its Microsoft News and MSN organizations. The layoffs are part of a bigger push by Microsoft to rely on artificial intelligence to pick news and content that’s presented on MSN.com, inside Microsoft’s Edge browser, and in the company’s various Microsoft News apps. Many of the affected workers are part of Microsoft’s SANE (search, ads, News, Edge) division, and are contracted as human editors to help pick stories. “Like all companies, we evaluate our business on a regular basis,” says a Microsoft spokesperson in a statement. “This can result in increased investment in some places and, from time to time, re-deployment in others. These decisions are not the result of the current pandemic.” While Microsoft says the layoffs aren’t directly related to the ongoing coronavirus pandemic, media businesses across the world have been hit hard by advertising revenues plummeting across TV, newspapers, online, and more. Business Insider first reported the layoffs on Friday, and says that around 50 jobs are affected in the US. The Microsoft News job losses are also affecting international teams, and The Guardian reports that around 27 are being let go in the UK after Microsoft decided to stop employing humans to curate articles on its homepages. Microsoft has been in the news business for more than 25 years, after launching MSN all the way back in 1995. At the launch of Microsoft News nearly two years ago, Microsoft revealed it had “more than 800 editors working from 50 locations around the world.” Microsoft has gradually been moving towards AI for its Microsoft News work in recent months, and has been encouraging publishers and journalists to make use of AI, too. 
Microsoft has been using AI to scan for content and then process and filter it and even suggest photos for human editors to pair it with. Microsoft had been using human editors to curate top stories from a variety of sources to display on Microsoft News, MSN, and Microsoft Edge. Microsoft lays off journalists to replace them with AI
  11. Google is bringing back human moderators to oversee YouTube content, taking over from automated systems that were given more responsibilities at the onset of the COVID-19 pandemic. YouTube revealed in late August that in the three months prior, 11.4 million videos had been removed from the platform for violating its Community Guidelines. This is the highest number of videos taken down from YouTube over a three-month period since the service was launched in 2005, and it was attributed to the heavier reliance on A.I. as the pandemic prevented human reviewers from going to work. YouTube admitted, however, that some of the videos would have been erroneously removed. “One of the decisions we made [at the beginning of the pandemic] when it came to machines who couldn’t be as precise as humans, we were going to err on the side of making sure that our users were protected, even though that might have resulted in a slightly higher number of videos coming down,” YouTube’s chief product officer, Neal Mohan, told the Financial Times. The Google-owned company revealed that it has reversed the decision to take down 160,000 videos, the Financial Times reported. Normally, less than 25% of appeals are successful, but under A.I. moderation, the percentage of successful appeals has increased to 50%. However, while Mohan claims that more human moderators will start overseeing YouTube content, it remains unclear how that will happen amid the ongoing pandemic. Digital Trends has reached out to Google for additional details on their working arrangements, and we will update this article as soon as we hear back. Source
  12. This tiny AI chip obliterates Google rival in a crucial metric Kneron’s KL720 outperforms Google’s Coral TPU by two times at half the power Kneron, a startup from California, has introduced its new neural processing unit (NPU) for edge AI applications. The KL720 NPU promises to enable sophisticated AI-enhanced applications that do not use the cloud; to that end, the chip packs more compute horsepower onboard than competitors while consuming less power. Today, there are numerous cloud-enabled AI platforms, such as Apple’s Siri and Amazon’s Alexa, that send data to data centers for processing and interpreting. Sending data to the cloud takes time and energy - not to mention privacy and security concerns - but processing and interpreting data locally is not always possible due to performance and power limitations. Kneron’s KL720 promises to offer enough performance at the edge of the network to enable local processing and interpreting. AI on the edge The Kneron KL720 is a tiny SoC that packs a 700MHz NPU, accompanied by a 500MHz Cadence DSP that acts like a co-processor and an Arm Cortex-M4 400MHz core for system control. The SoC boasts 128MB of LPDDR4 RAM and supports interfaces including MIPI-CSI-2, USB, SPI, SDIO, and UART among others. The chip will come in a 9x9mm (companion mode) or 11x11mm (host mode) form factor, depending on the targeted application. Kneron’s KL720 can process up to 4K resolution images and up to 2K resolution videos - and the chip also supports full natural language processing. Kneron says that the KL720 offers performance of around 1.5TOPS at 1.2W (average power consumption). By contrast, Google’s Coral edge AI solution offers 4TOPS at 2W. Meanwhile, Kneron claims the KL720 is four times more efficient than Google’s Coral on the MobileNetV2 benchmark, possibly because of the company’s software excellence. 
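It is worth noting that on the raw spec-sheet numbers quoted above, nominal efficiency actually favors Coral, which is why Kneron's efficiency claim rests on the MobileNetV2 benchmark rather than headline TOPS. A quick back-of-envelope check (the figures are the article's quoted specs, not independently verified):

```python
# Nominal efficiency (TOPS per watt) from the quoted spec-sheet figures.
# Real-world benchmark results, per Kneron's MobileNetV2 claim, may differ.
kl720 = {"tops": 1.5, "watts": 1.2}
coral = {"tops": 4.0, "watts": 2.0}

def tops_per_watt(chip):
    return chip["tops"] / chip["watts"]

print(f"KL720: {tops_per_watt(kl720):.2f} TOPS/W")  # 1.25
print(f"Coral: {tops_per_watt(coral):.2f} TOPS/W")  # 2.00
```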
The company also says a DJI drone powered by Intel’s Movidius chips would double its battery life if switched to the KL720. “KL720 combines power with unmatched energy-efficiency and Kneron’s industry-leading AI algorithms to enable a new era for smart devices,” said Albert Liu, founder and CEO of Kneron. “Its low cost enables even more devices to take advantage of the benefits of edge AI, protecting user privacy, to an extent competitors can't match. Combined with our existing KL520, we are proud to offer the most comprehensive suite of AI chips and software for devices on the market.” Kneron’s KL720 will start sampling soon, so expect it to power actual devices within the coming year. This tiny AI chip obliterates Google rival in a crucial metric
  13. Nvidia will build the 'world's fastest AI supercomputer' Cineca and Nvidia team up for a European Exascale supercomputer for AI Nvidia and Cineca, an Italian inter-university consortium and major supercomputing center, have announced plans to build ‘the world’s fastest AI supercomputer.’ The upcoming Leonardo system will use nearly 14,000 Nvidia A100 GPUs for a variety of high-performance computing tasks. The peak performance of the system is expected to hit 10 FP16 ExaFLOPS. The supercomputer will be based on Atos’ BullSequana XH2000 supercomputer nodes, each carrying one as-yet-unspecified Intel Xeon processor, four Nvidia A100 GPUs and a Mellanox HDR 200Gb/s InfiniBand card for connectivity. The blades are water cooled and there are 32 of them in each HPC cabinet. The BullSequana XH2000 architecture is very flexible, so it can house any CPU and GPU and, to that end, we can only guess which Intel Xeon processor will be used for Leonardo. Scientists from Italian universities plan to use Leonardo for drug discovery, space exploration and research, and weather modelling. Traditionally, such applications rely on high-performance simulation and data analytics workloads that require FP64 precision. But Nvidia says that today many HPC tasks rely on powerful artificial intelligence and machine learning - and for such workloads FP16 precision is enough. Quite naturally, a massive number of GPUs can also perform high-resolution visualizations. Nvidia’s A100 GPU was designed primarily for computing, so it supports all kinds of precision, including ‘supercomputing’ FP64 and ‘AI’ FP16. 14,000 Nvidia A100 GPUs can achieve up to 8.736 FP16 ExaFLOPS (624 TFLOPS per GPU with structural sparsity enabled × 14,000) performance. Meanwhile, the same number of GPUs can provide 135,800 FP64 TFLOPS, which is slightly below Summit’s 148,600 FP64 TFLOPS. 
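The arithmetic above checks out against Nvidia's published A100 peak figures (624 TFLOPS FP16 with structural sparsity, 9.7 TFLOPS FP64). A quick sanity check:

```python
# Back-of-envelope check of the article's aggregate FLOPS figures for
# ~14,000 A100 GPUs, using Nvidia's published per-GPU peaks.
gpus = 14_000
fp16_sparse_tflops = 624   # FP16 with structural sparsity enabled
fp64_tflops = 9.7          # FP64 peak

fp16_exaflops = gpus * fp16_sparse_tflops / 1_000_000  # TFLOPS -> EFLOPS
fp64_total_tflops = gpus * fp64_tflops

print(f"{fp16_exaflops:.3f} FP16 ExaFLOPS")     # 8.736
print(f"{fp64_total_tflops:,.0f} FP64 TFLOPS")  # 135,800
```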
Nvidia believes AI and ML are crucial for today’s supercomputer, so the company prefers to quote peak FP16 performance with structural sparsity enabled, in the case of the Leonardo supercomputer powered by its A100 GPUs. “With the advent of AI, we now have a new metric for measuring supercomputers. As a result, the performance of our supercomputers has exploded as the computational power of them has increased exponentially with the introduction of AI," Ian Buck, VP and GM of Accelerated Computing at Nvidia, told TechRadar Pro. "Today’s modern supercomputers must be AI supercomputers in order to be an essential tool for science. Nvidia is setting a new trend by combining HPC and AI. Only AI supercomputers can deliver 10 ExaFLOPS of AI performance featuring nearly 14,000 NVIDIA Ampere architecture-based GPUs.” Sources: Nvidia press release, Nvidia blog post Nvidia will build the 'world's fastest AI supercomputer'
  14. These Robots Use AI to Learn How to Clean Your House At Toyota, researchers are experimenting with prototypes that swoop from the ceiling to take care of chores with the help of machine learning. Inside an ordinary-looking home, a robot suspended from the ceiling slowly expands arms holding a sponge, before carefully wiping a kitchen surface clean. Nearby, another robot gently cleans a flat-screen television, causing it to wobble slightly. The cleaning robots live inside a mock home located at the Toyota Research Institute in Los Altos, California. The institute’s researchers are testing a range of robot technologies designed to help finally realize the dream of a home robot. After looking at homes in Japan, which were often small and cluttered, the researchers realized they needed a creative solution. “We thought, you know, how can we use the ceiling?” says Max Bajracharya, VP of Robotics at TRI. In another Toyota lab, in Cambridge, Massachusetts, robots practice picking up cups and plates and loading the dishwasher. In order to operate reliably, and without damaging anything, the robots rely on artificial intelligence algorithms that learn how to grasp objects correctly. The robot also uses a soft gripper equipped with a sense of touch that TRI developed. Toyota does not have a timeline for commercializing its prototypes, but it is looking to be an early entrant in a potentially big market. The Japanese carmaker launched the Toyota Research Institute with a $1 billion investment in 2015, betting that its expertise in car manufacturing could be parlayed into more advanced robotics for homes and workplaces. Gill Pratt, the institute’s CEO, says the sensing, computing, and actuation technology found in modern cars essentially makes them simple robots. “A car is just a robot that operates outside on roads,” he says. 
“The purpose of a car is to amplify a person, and we think of robots in the same way—as amplifiers of human activity.” There is hope, especially in Japan, that robots will help care for an aging population in the coming decades. Besides keeping homes clean, robots might tend to people who need physical help, and perhaps even keep them company. There is also an expectation that recent progress in AI will accelerate advances in robotics. Today, most industrial robots are still fairly dumb, blindly executing repetitive motions. But this is starting to change, with new sensing and planning capabilities, as well as some use of machine learning. The robotics industry is already worth hundreds of billions of dollars; significant advances could multiply the figures in coming decades. A growing number of startups hope to commercialize AI-powered robots for simple, repetitive tasks in fulfillment centers and retail stores. Big companies such as Google and Amazon are also investing in research that melds AI and robotics. Having a robot perform useful tasks inside an ordinary home is, however, still a huge challenge because it involves adapting to complex and unpredictable environments. The ability to manipulate any everyday object remains an unsolved problem in robotics despite recent advances. It isn’t clear how welcome robots will be in homes, as TRI’s researchers admit. But besides testing new hardware, TRI is conducting research on human-robot interaction. And earlier this year, it said it would build a simulated city near Mount Fuji in Japan, to test robotics and other technologies. As TRI’s projects show, creating robots that could be of help around the home depends on AI as well as computer simulation and the cloud. To teach robots how to perform a task, TRI researchers create a virtual reality environment where a person demonstrates for the robot by manipulating its arms. After many tries, the robot uses machine learning to determine the best movements. 
TRI’s robots also practice tasks like loading the dishwasher inside a simulated environment, which gives them many more attempts from which to learn. “We're starting to make some progress on how we [get the robots to] actually learn the behaviors and then share that behavior across, say, a fleet of robots,” says Bajracharya. These Robots Use AI to Learn How to Clean Your House
  15. The social network teams up with Carnegie Mellon University to find new ways to store and use renewable energy. Facebook has researchers who are trying to advance artificial intelligence through various projects. Facebook said Wednesday it will use artificial intelligence to help scientists find new ways to store and use renewable energy as part of a project that could help combat climate change. The social network is partnering with Carnegie Mellon University to spur the use of solar and wind energy, which are cleaner than burning fossil fuels that contribute to air pollution and global warming. Climate change continues to be a big concern this year in the wake of heat waves and forest fires on the West Coast. One of the challenges to using renewable energy is that it needs to be stored, sometimes for weeks or months, because the wind doesn't always blow and the sun doesn't shine all the time. Batteries can be used to store excess renewable energy, but Facebook said that using that method is expensive, especially if companies need to use a large power grid. The social network's new effort, known as the Open Catalyst Project, will focus on another solution that involves converting excess solar and wind energy into other fuels such as hydrogen or ethanol. The project's goal is to find low-cost catalysts to speed up these chemical reactions, allowing renewable energy to be stored for a cheaper price. Discovering new catalysts can be tough because there are many different possible combinations of elements. Facebook says AI will help predict atomic interactions faster than simulations used in labs. With the help of AI, the process could take seconds rather than days. If the project works, the research could fuel the use of renewable energy because it would cost less to store the energy for future use. The demand for renewable energy is rising. 
In most countries, generating electricity by using the sun's energy is consistently cheaper than burning coal or gas in power plants, according to the International Renewable Energy Agency. "As energy needs continue to climb and the fight against climate change grows more urgent, this problem offers a chance to advance AI in a way that will have a significant real-world impact," Facebook's Research Scientist Larry Zitnick said in a blog post. Source
  16. If artificial intelligence destroys humanity, it probably won’t be through killer robots—it will be through a thousand paper cuts. In the shadow of the immense benefits of advances in technology, the dark effects of AI algorithms are slowly creeping into different aspects of our lives, causing division, unintentionally marginalizing groups of people, stealing our attention, and widening the gap between the wealthy and the poor. While we’re already seeing and discussing many of the negative aspects of AI, not enough is being done to address them. And the reason is that we’re looking in the wrong place, as futurist Amy Webb discusses in her book The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity. Many are quick to blame large tech companies for the problems caused by artificial intelligence. They’re not wrong. A few very wealthy organizations wield enormous power over how AI systems are being developed and deployed across thousands of applications and delivered to billions of devices and users. And by extension, they are responsible for many of the problems we are facing, from algorithmic bias and social media filter bubbles to privacy issues and lack of diversity. These companies, however, are not inherently evil and not alone to blame for the broken state of AI, Webb argues in The Big Nine. The problems run much deeper, in the underlying systems that push these companies to work as they do. And if we don’t fix the problems at the root, the consequences can be disastrous. In The Big Nine, Webb provides a comprehensive layout of the current problems of the AI industry, an outlook of what can happen in the future, and a roadmap for setting the industry on the right path. G-MAFIA vs BAT: The overlords of artificial intelligence The Big Nine is a reference to nine big tech companies who have the lion’s share of what is happening in artificial intelligence. 
Six of them are in the United States: Google, Facebook, Microsoft, Apple, Amazon, and IBM. Webb collectively calls them the G-MAFIA and describes them as “a closed supernetwork of people with similar interests and backgrounds working within one field who have a controlling influence over our futures.” The three remaining companies are Chinese tech giants Baidu, Alibaba, and Tencent, who are already well-known as BAT. “I firmly believe that the leaders of these nine companies are driven by a profound sense of altruism and a desire to serve the greater good: they clearly see the potential of AI to improve health care and longevity, to solve our impending climate issues, and to lift millions of people out of poverty,” Webb writes. But the problem is that the Big Nine are being pushed by external forces—often inconspicuously—that are pressuring them to work in ways that are against their best intentions. The cultural problems of AI companies “The future of AI is being built by a relatively few like-minded people within small, insulated groups,” Webb writes. “[As] with all insulated groups that work closely together, their unconscious biases and myopia tend to become new systems of belief and accepted behaviors over time.” And this like-mindedness starts in the universities where big tech companies recruit their talent, and where the pioneers of AI hailed from. In U.S. universities, computer science programs are mostly focused on hard engineering skills, programming, systems engineering, math. When it comes to AI, students focus on machine learning algorithms, natural language processing, computer vision, and other technical skills. There’s little room for anthropology, philosophy, and ethics. Those topics are often overlooked or included as optional. Thirty years ago, when algorithms were still not too dominant in our lives, this would not be much of a problem. 
But today, AI is slowly but surely finding its way into sensitive areas such as processing loan applications and making hiring decisions. And in these situations, the algorithms reflect the unconscious biases, preferences, and blind spots of the people who are creating them. The people who develop AI at big tech companies usually come from similar social backgrounds, demographics, ethnicities, and religions. Consequently, their products often disadvantage or leave out certain groups of people. This is why we regularly see AI scandals such as an Amazon hiring algorithm that discriminates against women, an IBM face detection system that performs poorly on non-white female faces, a Google algorithm that mislabels images of black people, and an Apple algorithm that disadvantages women in credit scoring. Fortunately, all these events were followed by quick apologies and fixes issued by the respective companies. Unfortunately, most of them were found when someone stumbled on them by chance. What we don’t know is the many other hidden ways AI algorithms are discriminating against people without them knowing it. They are paper cuts, causing small disadvantages that might go unnoticed by the individual, but can have massive effects at scale. And when the people who are creating the AI systems are blind to their own biases, they surely won’t know where to look for problems. Why don’t universities fix their programs? Because technology is moving faster than academia. “A single, required ethics course—specifically built for and tailored to students studying AI—won’t do the trick if the material isn’t current and especially if what’s being taught doesn’t reverberate throughout other areas of the curriculum,” Webb writes. And universities can’t press pause to rethink and restructure their courses. “Universities want to show a strong record of employed graduates, and employers want graduates with hard skills. 
The Big Nine are partners with these universities, which rely on their funding and resources,” Webb writes in The Big Nine. But why don’t tech companies change their norms and criteria? The profit-driven AI market Throughout its history, AI research has gone through a series of summers and winters, periods of hype and excitement (and a lot of money thrown at AI research) followed by disillusionment and drying out of funding when the technologies failed to deliver on their promises. The success of deep neural networks in recent years has rejuvenated interest in the field of AI. But research on neural networks is extremely expensive and requires vast amounts of data and compute resources. The mounting costs of deep learning research have pushed AI scientists and research labs into the arms of large tech companies. The deep pockets of tech companies allow scientists to continue their research. But these companies are also driven by market forces and expect a return on their investment. “There is tremendous pressure for the G-MAFIA to build practical and commercial applications for AI as quickly as possible. In the digital space, investors have grown accustomed to quick wins and windfalls,” Webb writes in The Big Nine. The direct result of this drive is the premature and hasty release of “AI-powered” products to the market, which means developers don’t have time to weigh the negative ramifications. But the less noticed consequence is the commercialization of AI research. Scientific research labs are required to direct at least part of their resources to create profitable products so that they can keep their investors happy and secure the next round of funding. We’ve already seen this happen with the UK-based DeepMind, acquired by Google in 2014, and the San Francisco–based OpenAI, which is receiving its funding from Microsoft. DeepMind created an “applied” division which is working on commercial AI products. 
OpenAI has pledged to license its “pre-AGI” technologies to its investors, which for the moment include only Microsoft. Why aren’t tech companies and their use of AI regulated? “In the United States, the G-MAFIA wield significant power and influence over government in part because of America’s market economy system and because we have a strong cultural aversion toward strong government control of business,” Webb writes. But the situation is growing increasingly dangerous as AI and the technology created by the G-MAFIA continue to permeate every aspect of our lives. Per Webb: “Sometime in the next decade, the rest of the AI ecosystem will converge around just a few G-MAFIA systems. All the startups and players on the periphery—not to mention you and me—will have to accept a new order and pledge our allegiance to just a few commercial providers who now act as the operating systems for everyday life. Once your data, gadgets, appliances, cars, and services are entangled, you’ll be locked in. As you buy more stuff—like mobile phones, connected refrigerators, or smart earbuds—you’ll find that the G-MAFIA has become an operating system for your everyday life. Humanity is being made an offer that we just can’t refuse.” The AI-powered surveillance machine In China, where the state is using every tool at its disposal—including AI—to consolidate its power, the situation is very different but no less dangerous. The Chinese government well understands the implications and potential of advanced AI, and it has already laid out a roadmap to achieve AI domination by 2030. Unlike in the U.S., in China the government exerts full control over AI companies. BAT are legally obliged to put all of their data at the disposal of authorities and enable the state to conduct mass surveillance and control citizens through their technologies. 
One of the best-known instances of the government’s initiatives is the infamous Sesame social credit system, which employs AI algorithms and the platforms of BAT to keep a close watch on the behavior of Chinese citizens. The system is supposed to incentivize good behavior, such as abiding by the rules and keeping a good banking record, while punishing bad behavior such as playing video games late into the night and jaywalking. But it is also a tool to keep an eye on political dissidents and marginalize those who are not aligned with the views of the ruling party. What’s in it for BAT? “State-level surveillance is enabled by the BAT, who are in turn emboldened through China’s various institutional and industrial policies,” Webb writes. This is why the three companies flourish, together holding a vast share of China’s economy. Webb also spells out another warning that is often ignored: the AI brain drain caused by Chinese initiatives. “China is actively draining professors and researchers away from AI’s hubs in Canada and the United States, offering them attractive repatriation packages,” she writes. “There’s already a shortage of trained data scientists and machine-learning specialists. Siphoning off people will soon create a talent vacuum in the West. By far, this is China’s smartest long-term play—because it deprives the West of its ability to compete in the future.” What happens if we don’t fix AI? “AI’s consumerism model in the United States isn’t inherently evil. Neither is China’s government-centralized model. AI itself isn’t necessarily harmful to society,” Webb writes. “However, the G-MAFIA are profit-driven, publicly traded companies that must answer to Wall Street, regardless of the altruistic intentions of their leaders and employees. 
In China, the BAT are beholden to the Chinese government, which has already decided what’s best for the Chinese.” And what’s best for Wall Street and the Chinese government is not necessarily in the best interests of humanity. As we’ve discussed, we’re already bleeding from many paper cuts, and the situation will gradually grow worse if AI research and development is not steered in the right direction. “It’s difficult to wrap our heads around potential crises and opportunities before they’ve already happened, and that’s why we tend to stick to our existing narratives. That’s why we reference killer robots rather than paper cuts. Why we fetishize the future of AI rather than fearing the many algorithms that learn from our data,” Webb warns. In The Big Nine, Webb lays out three potential roadmaps for the future of AI, two of which are disastrous. In the “pragmatic scenario,” AI stakeholders will acknowledge problems but will only make minor changes. In the U.S., the government and G-MAFIA will not come together to make sure AI benefits everyone. The paper cuts will increase. Adversarial attacks, reward-hacking, incomplete AI systems, and algorithmic discrimination will continue to harm users across the world. Worried or not, the companies creating AI systems won’t do much because they are under the constant pressure of getting products to the market. People will lose ownership of their data, their privacy, their identities. The social and economic divide will continue to grow. Technological and economic power will continue to consolidate in a few companies, which will continue to compete for user attention and monetization potential and will bombard us with ads everywhere. “Rather than bringing us together, AI has effectively and efficiently split us all apart,” Webb warns. Meanwhile, in China, the government will continue to exert centralized control and use AI to consolidate its power. 
It will use its leverage to apply AI to its security and military apparatus and move toward developing human-level AI. It will eventually launch subtle AI-powered attacks and hold the digital infrastructure of the U.S. hostage. “Humanity is on the brink of a terrifying ASI [artificial super intelligence] that has been developed by a country that does not share our democratic values and ideals,” Webb warns. In the “catastrophic scenario,” the G-MAFIA will continue their unabated competition and will eventually establish their own version of China’s social score on citizens in different countries. People will lack the power to decide even the smallest things in their lives. The G-MAFIA will cause a divide among the people as everyone becomes locked into one of a few incompatible platforms that expand into all aspects of their lives. AI will influence the social fabric. “America and its allies, who once celebrated the G-MAFIA’s successes, are living under a system of AI totalitarianism,” Webb writes. “Citizens throughout China and all the countries supported by China’s direct investment and infrastructure find that they, too, are living under a pervasive apparatus of AI-powered punishment and reward.” China expands its AI dominion by exporting its technology and surveillance capabilities to other countries. Those countries inevitably become satellite states of the Chinese Communist Party and part of its AI-powered surveillance regime. The rivalry between China and the U.S. and its allies comes to a head when one of the parties develops superintelligent AI and annihilates the other. GAIA: The plan to set AI on the right course Not all is gloomy. In her book, Webb provides a series of steps that can set AI on the right course and make sure it will benefit all of humanity. 
Key among them is the formation of the Global Alliance on Intelligence Augmentation, or GAIA, an international body that includes AI researchers, sociologists, economists, game theorists, futurists, and political scientists from all member countries. GAIA will also represent all socioeconomic, gender, race, religious, political, and sexual diversities. “[GAIA members] agree to facilitate and cooperate on shared AI initiatives and policies, and over time they exert enough influence and control that an apocalypse—either because of AGI, ASI, or China’s use of AI to oppress citizens—is prevented,” Webb writes. Member nations of GAIA will collaborate to develop AI frameworks, standards, and best practices. Webb describes it as “a new social contract between citizens and the Big Nine” that is “based on trust and collaboration.” Such a body can bring about the “optimistic scenario,” in which AI is a force for good. Citizens benefit from transparency, standardized protocols, choice of technology, and ownership of data. AI complements human cognition, provides predictive care to everyone, fights climate change, finds and filters out misinformation on social media, and more. Under the guidance of GAIA, AI brings all people together. All states, including China, will be invited to join the alliance. If they don’t, their ambitions to extend their surveillance states will be held in check by a powerful global coalition that uses its technological and economic advantage for the good of all humanity. No government will be able to prey on poor countries to expand its own AI dominion. GAIA will provide a fairer alternative in which no state is forced to trade the wellbeing of its citizens for survival. That sounds easier said than done, but as Webb explains in her book, it is a path that is built one step, one brick, one pebble at a time. The Big Nine will play a crucial part in the future of AI, but we can’t let them do it alone. 
“Safe, beneficial technology isn’t the result of hope and happenstance. It is the product of courageous leadership and of dedicated, ongoing collaborations,” Webb writes. “The Big Nine are under intense pressure—from Wall Street in the United States and Beijing in China—to fulfill shortsighted expectations, even at great cost to our futures. We must empower and embolden the Big Nine to shift the trajectory of artificial intelligence, because without a groundswell of support from us, they cannot and will not do it on their own.” Source
  17. The projects aim to build more inclusive datasets Microsoft has announced a series of new efforts to make AI systems more inclusive of people with disabilities. The initiatives aim to combat the so-called “data desert” that’s left machine learning algorithms without enough relevant training data to be effective for people with conditions such as ALS. One project, called Object Recognition for Blind Image Training (ORBIT), seeks to change this by building a new public dataset of videos submitted by people who are blind or have low vision. The data will be used to develop algorithms for smartphone cameras that recognize important personal objects, such as someone’s cell phone or face mask. Microsoft is also collaborating with Team Gleason, an organization that supports people living with Amyotrophic Lateral Sclerosis (ALS), to create an open dataset of facial imagery of people with the neurodegenerative disease. This will be used to make computer vision and machine learning algorithms better at recognizing people with ALS symptoms. A third project led by VizWiz is developing a public dataset to train, validate, and test image captioning algorithms. They’ll be used to provide data about things that people with low vision photograph, such as how much of an over-the-counter medicine they should take. The team is also developing algorithms that instantly recognize when an image submitted is unclear and suggest how to retake it. “Often it’s not obvious what is meaningful to people, and that’s why it’s so important not just to design for — but design these technologies with — people who are in the blind and low vision community,” said Danna Gurari, who developed the VizWiz dataset. These initiatives address a few of the many ways in which mainstream algorithms don’t work well for people with disabilities because they aren’t trained on inclusive data. 
These oversights could potentially cause a self-driving car to not identify someone in a wheelchair as an object to avoid, or lead a predictive hiring system to lower the rank of job candidates with disabilities because they differ from the product’s idea of what a “successful” employee sounds and looks like. Building more inclusive and publicly available datasets will help reduce these risks and improve technologies designed to support people with disabilities. Source
  18. JERUSALEM (Reuters) - Intel Corp’s Habana Labs business said on Wednesday it would take time to gain market share from Nvidia in cloud and data centre computing but its deal this week with Amazon Web Services (AWS) was a solid first step. Intel in December bought Israel-based artificial intelligence firm Habana for about $2 billion, seeking to expand its AI portfolio to bolster its data-centre business. Habana’s Gaudi AI training processor, launched in June 2019, has faster processing speeds to compete with similar products from Intel rival Nvidia. “We have to realise that we’re starting from zero and Nvidia is 100%,” said Eitan Medina, Habana’s chief business officer, who said that having AWS as its first customer was very important. “The uphill battle or the process of taking market share has to go through convincing end developers to try it out,” he told reporters. “We are making the dent at the most important place. We’re starting with a very big guy that has the longest experience ... It will take time but I believe we’re on the right path.” Medina declined to comment on whether Habana was negotiating other deals. Habana on Tuesday said its Gaudi processors will power AWS’s Amazon Elastic Compute Cloud “instances” for machine learning workloads, in Habana’s first deal for its Gaudi chips. Amazon is seeing growing demand for its cloud tools during the coronavirus pandemic. These chips, Intel said, would give 40% better price performance than current graphics processing. Medina said that the advantages of Gaudi AI chips were efficiency and allowing for lower capital and operating expenses that in turn could give AWS and others the ability to lower prices for customers for server time. “We are now starting so it will depend on the combination of how we will execute and how important is it for users to lower their cost and to have alternatives to GPUs (graphics processing units),” Medina said. “Our total available market is 100% of AI.” Source
  19. AI excels at interpreting high volume, high velocity, complex data – which is just the ticket here Computer scientists have long been interested in the human body's immune system. It fights off an incredible range of attacks spanning from the common cold to flu, measles, and worse. As companies face a rising tide of cyber attacks, a new approach to email defence developed by cybersecurity company Darktrace uses our own ability to fight off external threats and replicates this ‘immune system’ approach in the digital world. Just as human beings have a protective skin to ward off germs, organizations today use a series of perimeter solutions to stop threats at the border. Anti-virus, next-generation firewalls and endpoint solutions are all examples of this. Inevitably though, every now and again a threat gets through. In the human body, this is when the immune system kicks in: a complex ecosystem of cells has evolved over time to learn what belongs in the body and what doesn't, and these cells spring into action to counter any unusual activity indicative of a threat. In the digital world, supervised and unsupervised machine learning work in tandem to identify deviations from the normal ‘pattern of life’ of the digital ecosystem and, like antibodies, mount a response to contain the assault. This immune system approach to cyber defence has been adopted by over 4,000 organisations around the world that found their protective skin wasn’t keeping them sufficiently safe from attack. And now, this layered AI approach is being used to identify malicious emails and protect corporate inboxes from harm. Email security: The legacy approach Legacy email scanners depend on rules and pre-prepared signatures to catch suspicious emails. Cyber criminals know this and adapt quickly, using a variety of techniques including short-lived email campaigns, bulk registration to keep domains fresh, and 'snow shoe' spamming to stop themselves showing up on anti-phishing radars. 
Traditional email security vendors try to adapt with newer technologies like sandboxes, which run suspicious attachments in a controlled environment to see what they do. These new technologies struggle in a game of constant one-upmanship. True to form, online crooks developed ways for malicious attachments to detect when they're sandboxed and change their behaviour accordingly. The struggle to spot malicious emails is compounded by another problem: old-school email scanning tools tend to get only one shot at catching phishing email and spam. They try to catch an increasing flow of toxins at the perimeter. If just one malicious email gets through that single line of defence, the game is done. The company stands a good chance of getting sick. The problem is that the move to cloud and remote working has been dissolving the perimeter for a long time. A new, multi-layered approach Artificial intelligence changed the game by reframing how defence tools see incoming threats like emails. It's a probabilistic technology that doesn't work in absolutes. Rather than using a single, rigid set of rules that works on a restricted set of inputs, it considers things using a complex statistical model. That's especially useful for complex data sets with lots of nuance, which is exactly what flows through the average corporate mailbox. While some tools use just one form of AI, there are actually several, and defenders like Darktrace can combine them to identify not just when an email is malicious, but why it is malicious, so that they can action the appropriate response. The most common model in use today is supervised machine learning. In this model, someone has to teach the software what to look for. If you want a computer to recognise a hot dog, then you have to show it lots of hot dog pictures, and lots of pictures of other things that aren't hot dogs. 
Supervised learning often supports computer vision applications, but it is also useful in cybersecurity-related areas like scanning for malicious attachments. By training on lots of malicious and legitimate samples, it can learn what each traditionally looks like. Mimicking the brain Supervised learning typically uses a neural network, which uses computers to mimic a brain's neurons. Each digital neuron takes an input and then either amplifies it, or not, before passing it onto the next. As the input passes through multiple layers of neurons, the neural net ignores some elements of the original input and focuses on others, enabling it to recognise things. In early neural nets, there were just a handful of layers of these neurons. The cloud's scalability, along with the advent of GPUs for AI training, has created deep learning networks with many more layers. These can handle even more complex, nuanced data sets. In computer vision, they can tell you that a picture contains a Caucasian hand holding a hot dog with relish and mustard. In cybersecurity, they can score DNS data to determine anomalies. Supervised machine learning has its place, but it also has its challenges. Someone has to label all those hot dog and not-dog pictures, or suspicious and legitimate emails. That needs a lot of human effort, or an existing data set that could quickly fall out of date. There's also a danger of human error and bias (people may disagree on what constitutes spam and label things differently). Another problem is overfitting. You can train a supervised learning model to be too choosy when it identifies things, requiring something to be just right for a match. It could wave through a basic hot dog but misidentify a chilli dog or a Montreal dog smothered in onions. When your hot dogs are phishing emails, that's a problem. Finally, supervised learning only knows what you show it. 
Present it with a new kind of hot dog or malicious email that it hasn't seen before and it will probably misidentify it. Unsupervised learning Unsupervised learning provides an extra layer of protection by taking a different approach to hunting down digital pathogens. Instead of looking at prior examples of things to detect, it teaches itself without human input. This is based on Bayesian mathematics, which adjusts the probability that something is a threat based on continual observation. It learns from new evidence. When applied to cybersecurity, this approach to AI spots correlations in new data, clustering it into patterns that give it a sense of normal digital behaviour and email trends. This brings several key benefits. First, it removes the error and bias that might find their way into some supervised learning data sets. Second, it can mould itself to each company's unique environment and behaviours. Because it doesn't rely on empirical data, unsupervised learning can also identify email campaigns that have not been seen before. In a world where the average lifecycle of an attack is reduced to hours, not days, this ability to detect and protect against novel attack infrastructure is crucial. Darktrace uses both supervised and unsupervised learning techniques in a multilayered approach to spot digital toxins. It uses malicious emails gathered via its global cybersecurity network for supervised learning, helping train deep learning models against known malicious assets. It combines this with over 60 unsupervised learning algorithms that compete with each other to find anomalous behaviour based on evidence ranging from device activity to senders, recipients, IP addresses, domains, and the timing of events. This enables it to detect email patterns that belong in context, and those that don't. This self-learning approach means the system takes around an hour to install, either on-premises as a hardware appliance or virtually, in Microsoft 365 or G Suite. 
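The two layers described above can be sketched in miniature. The snippet below is a toy illustration only, not Darktrace's actual (proprietary) models: a Laplace-smoothed Naive Bayes-style word score stands in for the supervised layer, and a per-sender deviation from a historical baseline stands in for the unsupervised "pattern of life" layer. All data, names, and thresholds are invented.

```python
import math
from collections import Counter

# --- Toy supervised layer (invented training data) ---
# A Laplace-smoothed log-likelihood ratio over words, the spirit of a
# classifier trained on labelled malicious vs. legitimate mail.
SPAM = ["win free prize now", "free money click now"]
HAM = ["meeting agenda attached", "lunch at noon today"]

def word_counts(docs):
    c = Counter()
    for d in docs:
        c.update(d.split())
    return c

spam_c, ham_c = word_counts(SPAM), word_counts(HAM)
vocab = set(spam_c) | set(ham_c)

def spam_score(text):
    """Positive means more spam-like, negative more legitimate-looking."""
    score = 0.0
    for w in text.split():
        p_spam = (spam_c[w] + 1) / (sum(spam_c.values()) + len(vocab))
        p_ham = (ham_c[w] + 1) / (sum(ham_c.values()) + len(vocab))
        score += math.log(p_spam / p_ham)
    return score

# --- Toy unsupervised layer (invented history) ---
# Score how far today's sending volume deviates from each sender's own
# baseline; the real "pattern of life" modelling is far richer.
history = {"alice": [3, 4, 3, 5], "mallory": [1, 1, 1, 1]}

def anomaly(sender, todays_count):
    xs = history[sender]
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    std = math.sqrt(var) or 1.0  # avoid divide-by-zero on flat baselines
    return abs(todays_count - mean) / std
```

A sudden burst of mail from a normally quiet sender scores high on the unsupervised layer even if every individual message looks clean to the supervised one, which is the point of combining them.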
Unlike traditional email security gateways, the tool doesn't sit inline, interpreting emails before passing them through. Instead, it journals from the email provider, quickly examining emails and telling the email server to block them if necessary. While traditional gateways only get one shot at spotting suspicious emails, this approach enables the AI-based tool to fold all incoming emails into its multi-layered AI model. A mail that seems innocuous now might only reveal itself as part of a campaign as more evidence comes to light. This is why it's important not just to explore emails in their current context, but to revisit them constantly as part of a broader historical corpus of email communication and events across the rest of the digital ecosystem. AI excels at interpreting high volume, high velocity, complex data. Each AI model has its pros and cons, but by combining more than one, companies can take a multi-layered approach that will give them more protection. This maps closely to the defence-in-depth models proposed by modern cybersecurity experts. In an environment where the volume, velocity, and complexity of malicious email continues to increase, we may have developed it just in time. Source
  20. DeepMind AI handles protein folding, which humbled previous software Google's AI specialists tackle biology's toughest computational problem. Proteins rapidly form complicated structures which had proven difficult to predict. (Image: Argonne National Lab / Flickr) Today, DeepMind announced that it has seemingly solved one of biology's outstanding problems: how the string of amino acids in a protein folds up into a three-dimensional shape that enables its complex functions. It's a computational challenge that has resisted the efforts of many very smart biologists for decades, despite the application of supercomputer-level hardware for these calculations. DeepMind instead trained its system using 128 specialized processors for a couple of weeks; it now returns potential structures within a couple of days. The limitations of the system aren't yet clear—DeepMind says it's currently planning on a peer-reviewed paper and has only made a blog post and some press releases available. But the system clearly performs better than anything that's come before it, after having more than doubled the performance of the best system in just four years. Even if it's not useful in every circumstance, the advance likely means that the structure of many proteins can now be predicted from nothing more than the DNA sequence of the gene that encodes them, which would mark a major change for biology. Between the folds To make proteins, our cells (and those of every other organism) chemically link amino acids to form a chain. This works because every amino acid shares a backbone that can be chemically connected to form a polymer. But each of the 20 amino acids used by life has a distinct set of atoms attached to that backbone. These can be charged or neutral, acidic or basic, etc., and these properties determine how each amino acid interacts with its neighbors and the environment. 
The interactions of these amino acids determine the three-dimensional structure that the chain adopts after it's produced. Hydrophobic amino acids end up on the interior of the structure in order to avoid the watery environment. Positive and negatively charged amino acids attract each other. Hydrogen bonds enable the formation of regular spirals or parallel sheets. Collectively, these shape what might otherwise be a disordered chain, enabling it to fold up into an ordered structure. And that ordered structure in turn defines the behavior of the protein, allowing it to act like a catalyst, bind to DNA, or drive the contraction of muscles. Determining the order of amino acids in the chain of a protein is relatively easy—they're defined by the order of DNA bases within the gene that encodes the protein. And as we've gotten very good at sequencing entire genomes, we have a superabundance of gene sequences and thus a huge surplus of protein sequences available to us now. For many of them, though, we have no idea what the folded protein looks like, which makes it difficult to determine how they function. Given that the backbone of a protein is very flexible, nearly any two amino acids of a protein could potentially interact with each other. So figuring out which ones actually do interact in the folded protein, and how that interaction minimizes the free energy of the final configuration, becomes an intractable computational challenge once the number of amino acids gets too large. Essentially, when any amino acid could occupy any potential coordinates in a 3D space, figuring out what to put where becomes difficult. Despite the difficulties, there has been some progress, including through distributed computing and gamification of folding. But an ongoing, biennial event called the Critical Assessment of protein Structure Prediction (CASP) has seen pretty irregular progress throughout its existence. 
And in the absence of a successful algorithm, people are left with the arduous task of purifying the protein and then using X-ray diffraction or cryo electron microscopy to figure out the structure of the purified form, endeavors that can often take years. DeepMind enters the fray DeepMind is an AI company that was acquired by Google in 2014. Since then, it's made a number of splashes, developing systems that have successfully taken on humans at Go, chess, and even StarCraft. In several of its notable successes, the system was trained simply by providing it with a game's rules before setting it loose to play itself. The approach is incredibly powerful, but it wasn't clear that it would work for protein folding. For one thing, there's no obvious external standard for a "win"—if you get a structure with a very low free energy, that doesn't guarantee there's something slightly lower out there. There's also not much in the way of rules. Yes, amino acids with opposite charges will lower the free energy if they're next to each other. But that won't happen if it comes at the cost of dozens of hydrogen bonds and hydrophobic amino acids sticking out into water. So how do you adapt an AI to work under these conditions? For their new algorithm, called AlphaFold, the DeepMind team treated the protein as a spatial network graph, with each amino acid as a node and the connections between them mediated by their proximity in the folded protein. The AI itself is then trained on the task of figuring out the configuration and strength of these connections by feeding it the previously determined structures of over 170,000 proteins obtained from a public database. When given a new protein, AlphaFold searches for any proteins with a related sequence, and aligns the related portions of the sequences. It also searches for proteins with known structures that also have regions of similarity. 
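To make the spatial-graph framing concrete, a folded chain can be reduced to a contact graph: residues are nodes, and an edge joins any pair whose coordinates fall within a distance cutoff. This toy sketch uses invented coordinates and a conventional-looking cutoff; AlphaFold's actual representation and learned edge features are far more sophisticated.

```python
import math

# Hypothetical coordinates (in angstroms) for a 5-residue peptide,
# invented purely to illustrate the node/edge framing.
coords = {
    0: (0.0, 0.0, 0.0),
    1: (3.8, 0.0, 0.0),
    2: (5.0, 3.6, 0.0),
    3: (2.0, 5.5, 1.0),
    4: (-1.0, 3.0, 1.5),
}

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def contact_graph(coords, cutoff=6.0):
    """Residues are nodes; an edge joins any pair closer than `cutoff`."""
    edges = {}
    for i in coords:
        for j in coords:
            if i < j:
                d = distance(coords[i], coords[j])
                if d < cutoff:
                    edges[(i, j)] = round(d, 2)
    return edges
```

In the real pipeline this contact information is what must be inferred rather than given, drawing in part on the sequence- and structure-similarity searches just described.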
Typically, these approaches are great at optimizing local features of the structure but not so great at predicting the overall protein structure—smooshing a bunch of highly optimized pieces together doesn't necessarily produce an optimal whole. And this is where an attention-based deep-learning portion of the algorithm was used to make sure that the overall structure was coherent. A clear success, but with limits For this year's CASP, AlphaFold and algorithms from other entrants were set loose on a series of proteins that were either not yet solved (and solved as the challenge went on) or were solved but not yet published. So there was no way for the algorithms' creators to prep the systems with real-world information, and the algorithms' output could be compared to the best real-world data as part of the challenge. AlphaFold did quite well—far better, in fact, than any other entry. For about two-thirds of the proteins it predicted a structure for, it was within the experimental error that you'd get if you tried to replicate the structural studies in a lab. Overall, on an evaluation of accuracy that ranges from zero to 100, it averaged a score of 92—again, the sort of range that you'd see if you tried to obtain the structure twice under two different conditions. By any reasonable standard, the computational challenge of figuring out a protein's structure has been solved. Unfortunately, there are a lot of unreasonable proteins out there. Some immediately get stuck into the membrane; others quickly pick up chemical modifications. Still others require extensive interactions with specialized enzymes that burn energy in order to force other proteins to refold. In all likelihood, AlphaFold will not be able to handle all of these edge cases, and without an academic paper describing the system, the system will take a little while—and some real-world use—to figure out its limitations. 
That's not to take away from an incredible achievement, just to warn against unreasonable expectations. The key question now is how quickly the system will be made available to the biological research community so that its limitations can be defined and we can start putting it to use on cases where it's likely to work well and have significant value, like the structure of proteins from pathogens or the mutated forms found in cancerous cells. DeepMind AI handles protein folding, which humbled previous software
  21. Despite protests from its staff in the past leading to its pulling out from controversial US Government projects, Google has now made a deal for its artificial intelligence technology to be used by the Trump administration to fortify the US-Mexico border, The Intercept reports, based on documents obtained under a FOIA request. The website said the US Customs and Border Protection had decided to use Google Cloud technology in August and deploy it through the CBP Innovation Team which is working to devise a virtual wall that has both surveillance towers and drones. The Intercept's long and detailed investigative report is well worth a read. In April 2018, Google came under pressure from some of its staff after it became known that it had signed up to assist a Pentagon mission known as Project Maven to provide the military with AI technology to better identify images on the battlefield. More than 3000 employees signed a letter that was made public. In May, a dozen Google employees quit the company to protest against its involvement in Project Maven. Google later said that it would not renew the contract to work on Maven, but gave no assurance that it would keep out of future similar projects. Intercept reporters Sam Biddle and Lee Fang wrote that the documents they had obtained showed that Google's work with the CBP was being done through a third-party firm named Thundercat Technology. The FOIA request was made by research group Tech Inquiry which has been set up by Jack Poulson, an ex-research scientist at Google who quit over issues of ethics. The report said the deal also brought Google into the circle of a tech executive who was one of US President Donald Trump's biggest boosters. This man, Palmer Luckey, runs a start-up known as Anduril Industries which operates sentry towers at the US-Mexico border. 
At least two other big tech firms have faced protests from their employees over working with US border control and law enforcement authorities. In June 2018, Microsoft employees protested against the company's deal with the Immigration and Customs Enforcement in the wake of the agency's separation of migrant children from their parents at the Mexico-US border. The same month, Amazon employees wrote to their boss, Jeff Bezos, telling him not to sell the company's Rekognition facial recognition software to law enforcement agencies and to cancel a contract for hosting data-mining company Palantir on its cloud. Biddle and Fang wrote that they had asked Google, Anduril, CBP and Thundercat for comment but received none. “This cannot be viewed as a neutral business relationship. Big Tech is providing core infrastructure for racist and harmful border regimes,” Meredith Whittaker, a former Google AI researcher who previously helped organise employee protests against Project Maven and went on to co-found NYU’s AI Now Institute, told The Intercept. “Without these infrastructures, Palantir and Anduril couldn’t operate as they do now, and thus neither could ICE or CBP. It’s extremely important that we track these enabling relationships, and push back against the large players enabling the rise of fascist technology, whether or not this tech is explicitly branded ‘Google’.” Source
  22. Amazon today announced a new Alexa feature for U.S.-based English-language users that enables devices powered by the assistant to infer latent goals, or goals implicit in requests but not directly expressed. For instance, if a user says “How long does it take to steep tea?,” Alexa might follow up with “Five minutes is a good place to start” and the question “Would you like me to set a timer for five minutes?” According to Amazon, dialog transitions like these require a number of AI algorithms under the hood. A machine learning-based trigger model decides whether to anticipate a latent goal by factoring in aspects of the context including text of a user’s session and whether the user has engaged with Alexa’s suggestions in the past. If the model finds the context suitable, the system suggests an Alexa app to address the latent goal. Those suggestions are based on relationships learned by the latent-goal discovery model, according to Amazon. (For example, the model might discover that users who ask how long tea should steep frequently follow up by setting a timer.) The latent-goal discovery model analyzes several features of user utterances including pointwise mutual information, a measure of the likelihood of an interaction in a context relative to its likelihood across Alexa traffic. Deep learning-based sub-modules assess additional features, such as whether a user was trying to rephrase or issue a command or whether the direct and latent goals share entities or values (like the time required to steep tea). Over time, the discovery model improves its predictions through active learning, which identifies sample interactions that are particularly informative during fine-tuning. In the next portion of Alexa’s latent goal inference pipeline, a semantic-role labeling model looks for named entities and other arguments from the current conversation including Alexa’s own responses. 
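The pointwise-mutual-information signal described above can be sketched in a few lines. This is an illustrative toy, not Amazon's implementation; the session logs and request labels below are invented for the example:

```python
import math
from collections import Counter

# Hypothetical interaction logs: each session is an ordered list of request types.
sessions = [
    ["steep_time", "set_timer"],
    ["steep_time", "set_timer"],
    ["steep_time", "play_music"],
    ["weather", "set_timer"],
    ["weather", "play_music"],
]

pair_counts = Counter()
unigram_counts = Counter()
for s in sessions:
    for a, b in zip(s, s[1:]):       # consecutive request pairs
        pair_counts[(a, b)] += 1
    unigram_counts.update(s)

total_pairs = sum(pair_counts.values())
total_unigrams = sum(unigram_counts.values())

def pmi(a, b):
    """log p(a,b) / (p(a) p(b)); higher values suggest b is a likely
    latent goal following request a, relative to overall traffic."""
    p_ab = pair_counts[(a, b)] / total_pairs
    p_a = unigram_counts[a] / total_unigrams
    p_b = unigram_counts[b] / total_unigrams
    return math.log(p_ab / (p_a * p_b))

# "set a timer" associates more strongly with asking about steeping tea
# than with asking about the weather in this toy corpus.
print(pmi("steep_time", "set_timer") > pmi("weather", "set_timer"))  # True
```

In a production system this score would be one feature among many fed to the trigger and discovery models, alongside the deep-learning sub-modules the article mentions.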
Context carryover models transform these entities into a structured format the follow-on app can understand, even if it’s a third-party app. Lastly, through bandit learning, in which machine learning models track whether recommendations are helping or not, underperforming experiences are automatically suppressed before they reach Alexa-enabled devices. Amazon says that latent goal inference requires no additional effort from app developers to activate. However, developers can make their apps more visible to the discovery model by using Amazon’s Name-Free Interaction Toolkit, which provides natural hooks for interactions between apps. “Amazon’s goal for Alexa is that customers should find interacting with her as natural as interacting with another human being,” Amazon wrote in a blog post. “While [apps] may experience different results, our early metrics show that latent goal [inference] has increased customer engagement with some developers’ apps.” Latent goal inference builds on Natural Turn Taking, an Alexa feature that lets users converse with the assistant without having to repeat a wake word. (Three AI models run in parallel to power Natural Turn Taking, which will initially only be available in English when it launches sometime next year.) Earlier this summer, Amazon launched another conversational capability in Alexa Conversations, which aims to make it easier for developers to integrate conversational experiences into apps. Source
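The bandit-learning step, tracking whether suggestions help and suppressing underperformers, can be illustrated with a toy epsilon-greedy scheme. The class name, thresholds, and suggestion labels here are all hypothetical, not Amazon's code:

```python
import random

class SuggestionBandit:
    """Toy epsilon-greedy bandit: suppress suggestions users keep rejecting."""

    def __init__(self, suggestions, epsilon=0.1, min_rate=0.2, min_trials=20):
        self.stats = {s: [0, 0] for s in suggestions}  # [acceptances, trials]
        self.epsilon = epsilon
        self.min_rate = min_rate      # acceptance rate below this -> suppress
        self.min_trials = min_trials  # don't suppress before enough data

    def _rate(self, s):
        ok, n = self.stats[s]
        return ok / n if n else 1.0   # optimistic prior for untried suggestions

    def active(self):
        # Underperforming suggestions are suppressed once enough data exists,
        # so they never reach the device.
        return [s for s, (ok, n) in self.stats.items()
                if n < self.min_trials or ok / n >= self.min_rate]

    def choose(self):
        pool = self.active()
        if random.random() < self.epsilon:
            return random.choice(pool)         # explore
        return max(pool, key=self._rate)       # exploit the best performer

    def record(self, s, accepted):
        self.stats[s][0] += int(accepted)
        self.stats[s][1] += 1
```

For example, after thirty rejected `"set_timer"` suggestions, `active()` would no longer include it, mirroring the automatic suppression the article describes.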
  23. Microsoft develops new machine learning model to detect password spray attacks Over the past few weeks, Microsoft has been promoting digital security efforts as part of its observance of the National Cyber Security Awareness Month (NCSAM) in October. It has announced new initiatives to promote cybersecurity awareness, unveiled the Zero Trust Deployment Center, released an Adversarial ML Threat Matrix, and launched a fairly successful offensive against the malicious Trickbot botnet. Now the company has stated that it has developed a new machine learning-based algorithm that detects password spray attacks with considerably better performance than its previous mechanism. For those unaware, a password spray is a relatively crude and common form of cyberattack in which a malicious actor tries a few commonly used passwords against thousands of accounts, rather than trying numerous passwords against a single user. While this means the success rate per account is quite low, it also makes the attack very difficult to detect, since it is spread so thinly that a tenant may dismiss a couple of unsuccessful login attempts per account as part of the regular login pattern. The attack can typically only be detected across multiple tenants, by noticing a single password hash failing across numerous accounts. To counter password spray attacks, Microsoft previously built a heuristic mechanism in which the company observed "the core failure in the system in... worldwide traffic" and notified organizations at risk. Now, the company has improved this mechanism by training a new supervised machine learning algorithm that uses features such as IP reputation, unfamiliar login properties, and other account deviations to detect when a tenant is under attack from a password spray. Microsoft claims that its new model has a 100% increase in recall compared to the heuristic algorithm, meaning it detects twice the number of compromised accounts. 
Furthermore, it also has 98% precision, meaning that if the model claims an account has fallen victim to a password spray, this is almost certainly true. The new model will soon be available to Azure AD Identity Protection customers, who will be able to use it in the Identity Protection portal and APIs. Microsoft develops new machine learning model to detect password spray attacks
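To make the recall and precision claims above concrete, here is how the two metrics are computed from a confusion matrix. The counts are illustrative only, chosen so that the new model's recall is exactly double the heuristic's, as Microsoft claims; they are not Microsoft's data:

```python
def precision_recall(tp, fp, fn):
    """tp: compromised accounts correctly flagged,
    fp: clean accounts wrongly flagged,
    fn: compromised accounts missed."""
    precision = tp / (tp + fp)  # of flagged accounts, how many truly compromised
    recall = tp / (tp + fn)     # of compromised accounts, how many were flagged
    return precision, recall

# Illustrative: suppose 100 accounts are truly compromised.
# Heuristic catches 40 of them; the new model catches 80 with only 2 false alarms.
heur_p, heur_r = precision_recall(tp=40, fp=10, fn=60)
model_p, model_r = precision_recall(tp=80, fp=2, fn=20)

print(model_r / heur_r)  # 2.0 -> a "100% increase in recall"
print(round(model_p, 2))  # 0.98 -> "98% precision"
```

This is why a 100% recall increase translates directly into "twice the number of compromised accounts detected": recall is the fraction of true compromises the system catches.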
  24. The battle of the algorithms has begun It was 2017 and a hacker had gained access to a digital system at an organization in India. At first it seemed like just a normal intrusion - the kind that happens thousands of times each day. But this one was different. When it examined the hacking incident, cybersecurity company Darktrace found that the attacker had analysed the organization’s digital behaviour so that they could mimic it to stay hidden. They hadn't done this manually - they'd used machine learning software to do it for them. Darktrace spotted the attack because it knew what normal behaviour looked like for that organization. The cyber security company uses AI to understand organizations’ ‘patterns of life’ to spot subtle deviations indicative of a cyber-threat. The incident in India indicated a worrying trend: attackers were beginning to use AI too. Changing the balance of power AI is a game changer in the ‘cat and mouse’ conflict between defenders of critical digital environments, and hackers looking to attack those systems. Traditional attacks have relied on human hackers who do their best to navigate a company’s digital defenses. Now, just as legitimate businesses use AI to disrupt key sectors, hackers are using AI to automate their attacks, driving new efficiencies into their operations. Welcome to the world of offensive AI. The use of AI for cyber-attacks looks set to tip the balance even further in the attackers' favor, and it's happening more quickly than you might think. In its report on offensive AI, Forrester asked over 100 cybersecurity decision makers about the security threats they face. Almost nine in ten thought it inevitable that these attacks would go mainstream and almost half expected to see some this year. Weaponising AI Intelligence agencies have already clued into AI's potential as a hacking tool. 
The Defense Advanced Research Projects Agency (DARPA) held an AI-powered hacking challenge in 2016 to explore how the technology could automate both attack and defence techniques. This Cyber Grand Challenge was only the beginning. Since then, it has explored human-assisted AI hacking techniques in a project called CHESS (Computers and Humans Exploring Software Security). Why are attackers so attracted to AI as an offensive tool, particularly when attacks without it are still working? To a cyber-criminal ring, the benefits of leveraging AI in their attacks are at least four-fold: it gives them an understanding of context; it helps to scale up operations; it makes detection and attribution harder; and it ultimately increases their profitability. Attackers have finite resources, like everyone else, so they are always on the hunt for technologies that can enable them to do more with less. Using AI to automate these attacks gives them a virtual digital army of attackers, all operating at computer speed. Forrester found that 44 percent of organizations already take over three hours to spot an infection, while almost 60 percent take more than three hours to understand its scope. An attacker using AI can infect a system in seconds and widen that time gap still further, giving them more time to locate and steal valuable data. AI-powered reconnaissance Offensive AI and automation will touch every part of the attack lifecycle, from initial reconnaissance through to the final stage: usually either ransomware or data exfiltration. During the reconnaissance phase, automated bots can sift through thousands of social media accounts at machine speed, autonomously selecting prime targets for spearphishing. Chatbots can interact with these employees via social media to gain trust, building relationships with them. The technology behind this is improving daily - for example, consider the success of OpenAI's GPT-3 text generation model. 
Before sending spearphishing emails at scale, you might need to put a face to a name on your newly created email account. Deepfake faces, created by generative adversarial neural networks, are available online for free. They also make great profile pictures for fake people on social media sites like LinkedIn. Deepfakes offer another strong attack vector for cybercriminals. Not all the attacks that use them are visual. Fraudsters used AI to mimic the voice of an energy company's German chief executive during a phone conversation with its UK-based CEO. The scam was good enough to fleece the company for €220,000. Things will only get worse: Forrester predicts that deepfakes will cost businesses $250m this year. Once upon a time, spearphishers would have to spend hours tracking and profiling specific targets before trying to scam them. AI's ability to analyse and mimic language automatically means it can launch attacks in volume via social media sites. Security company ZeroFOX demonstrated this, writing a neural network-based prototype tool that 'read' target Twitter posts and then crafted convincing-sounding tweets targeting specific people. Those tweets could easily persuade people to download malicious documents and infect a computer. Hiding in plain sight These capabilities now allow attackers to launch targeted, sophisticated email attacks that appear indistinguishable from legitimate communication. Relying on employees to tell friend from foe becomes a lost cause. It only takes one malicious email to land for attackers to carry the keys to the kingdom. The next stage involves moving laterally through the network to find other machines to exploit, latching onto different parts of the company's infrastructure. The key is to do it stealthily, and AI can help here too. Empire is a post-exploitation hacking framework that makes it easier for attackers to communicate with their malware once it's on a system. 
According to Darktrace, it also enables them to hide their activities in plain sight by restricting command and control traffic to periods of peak activity. Malware can also use AI to hide itself by making its behaviour unpredictable. In 2018 IBM Research announced DeepLocker, a system that embedded its payload in an innocuous-looking application like videoconferencing software. It used a deep neural network to decide when it would trigger its payload, making it difficult to test in a sandbox. In tests, IBM programmed the AI to only trigger if it recognised the face of a specific system user. In the future, automated decision-making tools could move around a system without any guidance from the hackers at all, minimising or even eliminating telltale command and control traffic. Once attached to a range of systems, the hacker's next step is to elevate their privileges. That requires login credentials. Password cracking has traditionally been a brute force affair, involving dictionary attacks against lists of known words and obvious alphanumeric combinations. Attackers can refine those attacks by using keywords that are more relevant to that user or organisation. To do that, they need to read the target's website. Are key portions of the site protected by a CAPTCHA? No problem. An AI-powered CAPTCHA breaker can gain access to sites by mindlessly selecting pictures of traffic lights or recognising text (there are online APIs available for under a dollar a go). Once in, it's easy to spider the site using a unique word extraction tool like CeWL. While this tool doesn't use AI, there are other proofs of concept that take things a stage further. Researchers at the Stevens Institute of Technology in Hoboken, New Jersey created PassGAN, a tool that feeds unique word lists into generative adversarial networks (the same technique used in deepfakes) to generate large volumes of likely passwords. Where do we go from here? 
The difficulty in attribution of cyber-crime makes it hard to tell when an attack doesn’t have a human behind the keyboard. However, there are hallmarks, such as attacks that seem to blend into the environment, and malware that is aware of sandboxing environments and changes its behavior accordingly. And according to Darktrace, those hallmarks are becoming more and more common in today’s cyber-attacks. As AI-powered attacks increase, legacy security tools won't be able to see them because they use inflexible rule sets. AI's adaptive algorithms will calculate ways to evade them. Instead, we must prepare to fight fire with fire, countering AI's weaponisation by using it to defend our own networks. Over 80 percent of Forrester's respondents believe that we need tools that 'think' the same way these new AI-powered attacks do. Instead of using rules to identify pre-defined signs of malicious behaviour, AI-powered cybersecurity looks for anything that deviates from behaviour it perceives as normal. To do this, the technology looks beyond individual data points such as email content, domains, and IP addresses. It dynamically assesses hundreds of data points, assessing them not against static rules but as part of a broader statistical model that takes the full history of events across the email, cloud and network realms into account. By learning the ‘patterns of life’ for every user and device in an organization, defensive AI can establish what is and what is not ‘normal’ behaviour. When it sees activity that deviates from those patterns of life, it raises the alarm. In order to cause damage, attackers - whether human or AI - must by definition do something out of the ordinary. Its constantly evolving understanding of what ‘ordinary’ looks like is what helps defensive AI fight back. Adaptability is a key part of this approach because the technology is constantly retraining to accommodate new data. 
That's important because what's normal for an organization one month may not be normal three or six months later, especially after a global event like a pandemic that seismically alters working patterns. And visibility is crucial in deciding the ultimate winner in this battle of algorithms. AI cyber defense has an overview of the entire digital environment - not just a subsection of it. The attacker can make an inroad or inroads but ultimately will never have that complete visibility, and so defenders’ understanding of ‘normal’ will always be more precise, more up-to-date and more informed. Part of any combat strategy involves looking not just at today's fight, but tomorrow's. Smart defenders will anticipate the next generation of attack weapons and prepare appropriate defenses. At a point where we're facing a step change in capability, that forward-looking approach is more important than ever. Source
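The "pattern of life" idea described above can be sketched as a simple per-user baseline with a deviation threshold. Real systems like Darktrace's model hundreds of features jointly and retrain continuously; this toy uses one feature and invented counts purely to show the principle:

```python
import statistics

# Hypothetical per-hour login counts observed for one user over recent weeks:
# this is the learned baseline of "normal" behaviour.
history = [3, 4, 2, 5, 3, 4, 3, 2, 4, 5, 3, 4]

mean = statistics.mean(history)
stdev = statistics.pstdev(history)

def is_anomalous(count, threshold=3.0):
    """Flag activity that deviates sharply (in standard deviations)
    from this user's learned baseline."""
    return abs(count - mean) / stdev > threshold

print(is_anomalous(4))   # within the normal pattern -> False
print(is_anomalous(40))  # far outside it -> raise the alarm
```

Appending each new observation to `history` and recomputing the baseline is the crude analogue of the constant retraining the article describes: what counts as "ordinary" shifts as working patterns shift.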
  25. A Council of Citizens Should Regulate Algorithms To ensure our AI-driven future is just and equitable, we should borrow from ancient Athens. Are machine-learning algorithms biased, wrong, and racist? Let citizens decide. Essentially rule-based structures for making decisions, machine-learning algorithms play an increasingly large role in our lives. They suggest what we should read and watch, whom we should date, and whether or not we are detained while awaiting trial. Their promise is huge: they can detect cancers better. But they can also discriminate based on the color of our skin or the zip code we live in. Despite their ubiquity in society, no real structure exists to regulate algorithms' use. We rely on journalists or civil society organizations to serendipitously report when things have gone wrong. In the meantime, the use of algorithms spreads to every corner of our lives and many agencies of our government. In the post-Covid-19 world, the problem is bound to reach colossal proportions. A new report by OpenAI suggests we should create external auditing bodies to evaluate the societal impact of algorithm-based decisions. But the report does not specify what such bodies should look like. We don’t know how to regulate algorithms, because their application to societal problems involves a fundamental incongruity. Algorithms follow logical rules in order to optimize for a given outcome. Public policy is all a matter of trade-offs: optimizing for some groups in society necessarily makes others worse off. Resolving social trade-offs requires that many different voices be heard. This may sound radical, but it is in fact the original lesson of democracy: Citizens should have a say. 
We don’t know how to regulate algorithms, because we have become shockingly bad at citizen governance. Is citizen governance feasible today? Sure, it is. We know from social scientists that a diverse group of people can make very good decisions. We also know from a number of recent experiments that citizens can be called upon to make decisions on very tough policy issues, including climate change, and even to shape constitutions. Finally, we can draw from the past for inspiration on how to actually build citizen-run institutions. The ancient Athenians—the citizens of the world’s first large-scale experiment in democracy—built an entire society on the principle of citizen governance. One institution stands out for our purposes: the Council of Five Hundred, a deliberative body in charge of all decisionmaking, from war to state finance to entertainment. Every year, 50 citizens from each of the 10 tribes were selected by lot to serve. Selection occurred among those that had not served the year before and had not already served twice. These simple organizational rules facilitated broad participation, knowledge aggregation, and citizen learning. First, because the term was limited and could not be iterated more than twice, over time a broad section of the population—rich and poor, educated and not—participated in decisionmaking. Second, because the council represented the whole population (each tribe integrated three different geographic constituencies), it could draw upon the diverse knowledge of its members. Third, at the end of their mandate, councillors returned home with a body of knowledge about the affairs of their city that they could share with their families, friends, and coworkers, some of whom already served and some who soon would. Certainly, the Athenians did not follow through on their commitment to inclusion. As a result, many people’s voices went unheard, including those of women, foreigners, and slaves. 
But we don’t need to follow the Athenian example on this front. A citizen council for algorithms modeled on the Athenian example would represent the entire American citizen population. We already do this with juries (although it is possible that, when decisions affect a specific constituency, a better fit with the actual polity might be required). Citizens’ deliberations would be informed by agency self-assessments and algorithmic impact statements for decision systems used by government agencies, and internal auditing reports for industry, as well as reports from investigative journalists and civil society activists, whenever available. Ideally, the council would act as an authoritative body or as an advisory board to an existing regulatory agency. It could evaluate, as OpenAI recommends, a variety of issues including the level of privacy protection, the extent to (and methods by) which the systems were tested for safety, security, or ethical concerns, and the sources of data, labor, and other resources used. Reports like the one OpenAI just released provide an important first step in the process of getting industry buy-in. The report highlights both the risks of unregulated development of the technology and the benefits of an inclusive process to devise regulatory bodies. For example, industry could play a role in the selection process or in the choice of material available to the councillors, or by providing expert advice. The council would be a fair and efficient response to the question of how to resolve the societal trade-offs that algorithms create. Unlike proposed technocratic solutions and traditional auditing structures, the council would expand the range of possible solutions to the problems that algorithms create, enhance democratic accountability, and foster citizen participation and learning. The erosion of commitments to democratic norms and institutions around the world calls for new ideas. 
The time is ripe for considering creative institutional solutions to tackle some of the greatest challenges society faces. Athens’ democracy reminds us that we have been outsourcing governance for two and a half millennia, first to kings, then to experts, and now to machines. This is an opportunity to reverse the trend. A Council of Citizens Should Regulate Algorithms