Showing results for tags 'artificial intelligence'.

Found 18 results

  1. Boffins spill beans on super 'scope machine-learning tech Google Health's so-called augmented-reality microscope has proven surprisingly accurate at detecting and diagnosing cancerous tumors in real time. The device is essentially a standard microscope decked out with two extra components: a camera, and a computer running AI software with an Nvidia Titan Xp GPU to accelerate the number crunching. The camera continuously snaps images of body tissue placed under the microscope, and passes these images to a convolutional neural network on the computer to analyze. In return, the neural net spits out, in real time allegedly, a heatmap of the cells in the image, labeling areas that are benign and abnormal on the screen for doctors to inspect. Google's eggheads tried using the device to detect the presence of cancer in samples of breast and prostate cells. The algorithms had a performance score of 0.92 when detecting cancerous lymph nodes in breast cancer and 0.93 for prostate cancer, with one being a perfect score, so it's not bad for what they describe as a proof of concept. Details of the microscope system have been described in a paper published in Nature this week. The device is a pretty challenging system to build: it requires a processing pipeline that can handle, on the fly, microscope snaps that are high resolution enough to capture details at the cellular level. The images used in this experiment measure 5,120 × 5,120 pixels. That's much larger than what's typically used for today's deep learning algorithms, which have millions of parameters and require billions of floating-point operations just to process images as big as 300 pixels by 300 pixels. To cope with these larger images, the convolutional neural network, which is based on Google's Inception V3 architecture, breaks them up into little patches that are analysed individually.
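The patch-based pipeline described above can be sketched in a few lines of Python. The patch size and stride below are illustrative assumptions; the article does not give the tiling parameters Google actually used.

```python
import numpy as np

def extract_patches(image, patch=512, stride=512):
    """Split a large image into fixed-size tiles for per-patch inference.

    A 5,120 x 5,120 microscope frame is far too large for a typical CNN,
    so it is analysed as a grid of smaller patches whose predictions are
    later stitched back into a heatmap. Patch size and stride here are
    illustrative, not the values used in the paper.
    """
    h, w = image.shape[:2]
    patches, coords = [], []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(image[y:y + patch, x:x + patch])
            coords.append((y, x))  # remember where each tile came from
    return np.stack(patches), coords

frame = np.zeros((5120, 5120), dtype=np.uint8)  # stand-in for a microscope snap
tiles, coords = extract_patches(frame)
print(tiles.shape)  # 100 tiles of 512 x 512
```

With a 512-pixel tile and matching stride, a 5,120 × 5,120 frame yields a 10 × 10 grid of tiles, each small enough for a conventional CNN; the recorded coordinates let the per-patch predictions be reassembled into the on-screen heatmap.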
It also takes time to train the technology to detect and classify cancerous cells, with the help of humans, from pictures of varying levels of quality. All of this then has to work in real time during the inference stage in order for it to be useful for doctors: they'd like to know as soon as possible, not hours or days later. "The latency quantifies the absolute computational performance of the ARM [augmented-reality microscope] system," the researchers wrote. Although they used it to study cancer, they believe the device might prove useful for other applications too. "Beyond the clinic, the ARM could potentially be useful as a teaching tool by leveraging reverse image search tools that can help trainees quickly search reference resources and answer the question 'what is this histologic feature that I am looking at?' More experienced doctors could also leverage the ARM for clinical research to prospectively validate AI algorithms not previously approved for patient care, such as mutational status or microsatellite instability predictions." ARM is also promising for another reason. It's cheaper than "conventional whole-slide scanners" by one to two orders of magnitude, apparently. We have asked Google for further comment. Source
  2. One chip to rule them all: It natively runs all types of AI software Machine learning and artificial neurons right at home on new chip. Skynet light? The Tianjic-controlled bike stalks one of its creators. Jing Pei et al. We tend to think of AI as a monolithic entity, but it's actually developed along multiple branches. One of the main branches involves performing traditional calculations but feeding the results into another layer that takes input from multiple calculations and weighs them before performing its calculations and forwarding those on. Another branch involves mimicking the behavior of traditional neurons: many small units communicating in bursts of activity called spikes, and keeping track of the history of past activity. Each of these, in turn, has different branches based on the structure of its layers and communications networks, types of calculations performed, and so on. Rather than being able to act in a manner we'd recognize as intelligent, many of these are very good at specialized problems, like pattern recognition or playing poker. And processors that are meant to accelerate the performance of the software can typically only improve a subset of them. That last division may have come to an end with the development of Tianjic by a large team of researchers primarily based in China. Tianjic is engineered so that its individual processing units can switch from spiking communications back to binary and perform a large range of calculations, in almost all cases faster and more efficiently than a GPU can. To demonstrate the chip's abilities, the researchers threw together a self-driving bicycle that ran three different AI algorithms on a single chip simultaneously.

Divided in two

While there are many types of AI software, the key division identified by the researchers is between what can be termed layered calculations and spiking communications.
The former (which includes things like convolutional neural networks and deep-learning algorithms) use layers of calculating units, which feed the results of their calculations into the next layer using standard binary data. Each of these units has to keep track of which other units it communicates with and how much weight to give each of its inputs. On the other side of the divide are approaches inspired more directly by biology. These communicate in analog "spikes" of activity, rather than data. Individual units have to keep track of not only their present state, but their past history. That's because their probability of sending a spike depends on how often they've received spikes in the past. They're also arranged in large networks, but they don't necessarily have a clean layered structure or perform the same sort of detailed computations within any unit. Both of these approaches have benefitted from dedicated hardware, which tends to be at least as good as implementing the software on GPUs and far more energy efficient. (One example of this is IBM's TrueNorth processor.) But the vast difference in communications and calculations between the classes has meant that a processor is only good for one or the other type. That's what the Tianjic team has changed with what it's calling the FCore architecture. FCore is designed so that the two different classes of AI can either be represented by a common underlying compute architecture or easily reconfigured on the fly to handle one or the other. To enable communications among its compute units, FCore uses the native language of traditional neural networks: binary. But FCore is also able to output spikes in a binary format, allowing it to communicate in terms that a neuron-based algorithm can understand. Local memory at each processing unit can be used either for tracking the history of spikes or as a buffer for input and output data.
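A toy sketch of the two computing styles FCore unifies may help: a conventional layered unit is a weighted sum passed through an activation, while a spiking unit integrates inputs over time and fires when a threshold is crossed. The leak, threshold, and input values below are illustrative choices, not parameters from the Tianjic paper.

```python
import numpy as np

def dense_layer(x, w, b):
    """Layered paradigm: weighted sum of inputs plus a ReLU activation."""
    return np.maximum(0.0, w @ x + b)

class LIFNeuron:
    """Spiking paradigm: a toy leaky integrate-and-fire unit.

    Membrane potential accumulates incoming activity, leaks over time,
    and the unit emits a binary spike when a threshold is crossed --
    so its output depends on its recent input history, not just the
    current input. Decay and threshold values are illustrative.
    """
    def __init__(self, threshold=1.0, decay=0.9):
        self.v = 0.0
        self.threshold = threshold
        self.decay = decay

    def step(self, incoming):
        self.v = self.decay * self.v + incoming  # leak, then integrate
        if self.v >= self.threshold:
            self.v = 0.0   # reset after firing
            return 1       # spike
        return 0

# Layered unit: one weighted sum, no memory of past inputs.
out = dense_layer(np.array([2.0, 1.0]), np.array([[1.0, -1.0]]), np.array([0.5]))

# Spiking unit: constant drive produces a periodic spike train.
neuron = LIFNeuron()
spikes = [neuron.step(0.4) for _ in range(10)]
print(out, spikes)
```

The dense unit is stateless, so identical inputs always give identical outputs; the spiking unit fires only every third step under the same constant drive, which is exactly the history-dependence the article describes.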
Some of the calculation hardware needed for neural networks is shut down and bypassed when in artificial neuron mode.

In the chip

With these and a few additional features implemented, each individual compute unit in an FCore can be switched between the two modes, performing either type of calculation and communication as needed. More critically, a single unit can be set into a sort of hybrid mode. That means taking input from one type of AI algorithm but formatting its output so that it's understood by another—reading spikes and outputting data, or the opposite. That also means any unit on the chip can act as a translator between two types of algorithms, allowing them to communicate with each other when they're run on the same chip. The FCore architecture was also designed to scale. The map of connections among its compute units is held in a bit of memory that's separate from the compute units themselves, and it's made large enough to allow connections to be made external to an individual chip. Thus, a single neural network could potentially be spread across multiple cores in a processor or even multiple processors. In fact, the Tianjic chip is made up of multiple FCores (156 of them) arranged in a 2D mesh. In total, there are about 40,000 individual compute units on the chip, which implies an individual FCore has 256 of them. It's fabricated on a 28 nanometer process, with features more than double the size of the cutting-edge processes used by desktop and mobile chipmakers. Despite that, it can shift over 600GB/second internally and perform nearly 1.3 Tera-ops per second when run at 300MHz. Despite the low clockspeed, the Tianjic put up some impressive numbers when run against the same algorithms implemented on an NVIDIA Titan-Xp. Performance ranged from 1.6 times to 100 times that of the GPU, depending on the algorithm. And, when energy use was considered, the performance per Watt was almost comical, ranging from 12x all the way up to over 10,000x.
Other dedicated AI processors have had strong performance-per-Watt, but they haven't been able to run all the different types of algorithms demonstrated here.

Like riding a... well, you know

On its own, this would have been an interesting paper. But the research team went further by showing that Tianjic's abilities could be put to use even in its experimental form. "To demonstrate the utility of building a brain-like cross-paradigm system," the researchers write, "we designed an unmanned bicycle experiment by deploying multiple specialized networks in parallel within one Tianjic chip." The bike did object detection via a convolutional neural network, while a continuous attractor neural network provided target tracking to allow the bike to follow a researcher around. Meanwhile, a spiking neural network allowed the bike to follow voice commands. Something called a multilayer perceptron tracked the bike's balance. And all of these inputs were coordinated by a neural state machine based on a spiking neural network. Somewhere on that bicycle is a Tianjic chip. And it worked. While the bike wasn't self-driving in the sense that it was ready to take someone through the bike lanes of a major city, it was certainly good enough to be a researcher's faithful companion during a walk around a test track that included obstacles. Overall, this is an impressive bit of work. Either the processor alone or the automated bicycle would have made a solid paper on its own. And the idea of getting a single chip to natively host two radically different software architectures was a bold one. But there is one caution worth pointing out, in that the researchers posit this as a route to a general intelligence AI. In a lot of ways, Tianjic does resemble a brain: the brain uses a single architecture (the neuron) to host a variety of different processes that, collectively, make sense of the world and plan actions that respond to it.
To an extent, the researchers are right that being able to run and integrate multiple algorithms at once is a path towards something like that. But this is still not necessarily a route to general intelligence. In our brain, specialized regions—the algorithm equivalent—can perform a collection of poorly defined and only vaguely related activities. And a single task (like deciding where to focus our attention) takes myriad inputs. They range from our recent history to our emotional state to what we're holding in temporary memory to biases built up through millions of years of evolution. So just being able to run multiple algorithms is still a long way off from anything we'd recognize as intelligence. Nature, 2019. DOI: 10.1038/s41586-019-1424-8 (About DOIs). Source: One chip to rule them all: It natively runs all types of AI software (Ars Technica)
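The hybrid units described in this piece, which read spikes but emit data (or the reverse), amount to translating between the two codings. A minimal illustration is a simple rate code; Tianjic's actual spike format is richer, so treat this only as a sketch of the idea.

```python
def spikes_to_value(spike_train):
    """Rate-code decoding: turn a window of binary spikes into a real
    number, as a hybrid unit might when bridging a spiking network
    into a conventional one. The coding scheme is an illustrative
    assumption, not the format described in the Tianjic paper.
    """
    return sum(spike_train) / len(spike_train)

def value_to_spikes(value, window=10):
    """Deterministic rate-code encoding of a value in [0, 1]:
    fire for the first round(value * window) ticks of the window."""
    fires = round(value * window)
    return [1] * fires + [0] * (window - fires)

train = value_to_spikes(0.7)
print(train, spikes_to_value(train))
```

A unit doing this in both directions lets, say, a convolutional detector and a spiking controller exchange results on the same chip, which is what the bicycle demo relies on.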
  3. Artificial intelligence progress gets gummed up in silos and cultural issues Survey finds things slow-going for AI and robotic process automation. Silos have always been considered a bad thing for enterprise IT environments, and today's push for artificial intelligence and other cognitive technologies is no exception. A recent survey shows fewer than 50% of enterprises have deployed any of the "intelligent automation technologies" -- such as artificial intelligence (AI) and robotic process automation (RPA). IT leaders participating in the survey say data and applications within their companies are too siloed to make it work. That's the gist of a survey of 500 IT executives, conducted by IDG in partnership with Appian. The majority of executives, 86%, say they seek to achieve high levels of integration between human work, AI, and RPA over the coming year. The problem is they have a long way to go -- at this time, only 12% said their companies do this really well today. Where are the problems? Two-thirds of executives, 66%, stated that they "have difficulty integrating existing IT investments and skills with demanding AI and RPA technology." Notably, 43% cite changing the IT culture as an obstacle to AI and RPA. While the survey report's authors did not spell out what kind of changes were required, it can be assumed that IT culture is hampered by a need for constant maintenance and firefighting, versus focusing more on innovation. There may also be issues with communication between the IT and business sides of the house -- as well as interacting more with data science types. Some of these issues may eventually see some relief through agile and DevOps initiatives. Additional issues that hold back AI and RPA progress include concerns about security, cited by 41%, and application development issues seen by 34% of the group.
Again, this was not elaborated in the report, but application development roadblocks likely stem from lack of proper tools to build AI-driven applications, along with the need for skills development or refreshes. In addition, linking automation efforts to improving customer experiences was problematic. Two-thirds of executives, 66%, say they needed a better multi-channel buying experience. However, 26% lack the systems to deliver integrated multi-channel customer experiences, and 22% need to build or buy software to implement multi-channel customer experiences. Another 21% say they even lack a strategy for delivering integrated multi-channel customer experiences. At this point, it appears AI and RPA are mainly the tools of the largest corporations with humongous IT staffs. While there are deployments of individual emerging automation technologies, a lack of strategy and clear alignment to business goals is resulting in siloed deployments and overwhelmed internal application development teams. Less than half of surveyed companies have deployed any form of intelligent automation. Fully half of those companies boast IT staffs in excess of 20,000 employees. Source
  4. ThisPersonDoesNotExist.com uses AI to generate endless fake faces Hit refresh to lock eyes with another imaginary stranger A few sample faces — all completely fake — created by ThisPersonDoesNotExist.com The ability of AI to generate fake visuals is not yet mainstream knowledge, but a new website — ThisPersonDoesNotExist.com — offers a quick and persuasive education. The site is the creation of Philip Wang, a software engineer at Uber, and uses research released last year by chip designer Nvidia to create an endless stream of fake portraits. The algorithm behind it is trained on a huge dataset of real images, then uses a type of neural network known as a generative adversarial network (or GAN) to fabricate new examples. "Each time you refresh the site, the network will generate a new facial image from scratch," wrote Wang in a Facebook post. He added in a statement to Motherboard: "Most people do not understand how good AIs will be at synthesizing images in the future." The underlying AI framework powering the site was originally invented by a researcher named Ian Goodfellow. Nvidia's take on the algorithm, named StyleGAN, was made open source recently and has proven to be incredibly flexible. Although this version of the model is trained to generate human faces, it can, in theory, mimic any source. Researchers are already experimenting with other targets, including anime characters, fonts, and graffiti. As we've discussed before at The Verge, the power of algorithms like StyleGAN raises a lot of questions. On the one hand, there are obvious creative applications for this technology. Programs like this could create endless virtual worlds, as well as help designers and illustrators. They're already leading to new types of artwork. Then there are the downsides.
As we’ve seen in discussions about deepfakes (which use GANs to paste people’s faces onto target videos, often in order to create non-consensual pornography), the ability to manipulate and generate realistic imagery at scale is going to have a huge effect on how modern societies think about evidence and trust. Such software could also be extremely useful for creating political propaganda and influence campaigns. In other words, ThisPersonDoesNotExist.com is just the polite introduction to this new technology. The rude awakening comes later. Source
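Conceptually, a GAN generator is just a learned function from a random latent vector to an image, and "each refresh" samples a new latent vector. The sketch below fakes the generator with an untrained random linear map purely to show that interface; a real StyleGAN generator is a deep network whose weights were trained adversarially against a discriminator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "generator": a fixed random linear map from a latent vector to a
# flat 8x8 image. The weights stand in for a trained network's
# parameters -- this is only the sampling interface, not a real GAN.
LATENT_DIM, IMG_SIDE = 16, 8
weights = rng.normal(size=(IMG_SIDE * IMG_SIDE, LATENT_DIM))

def generate_face(seed):
    """Each page refresh corresponds to sampling a fresh latent vector
    and pushing it through the (fixed) generator."""
    z = np.random.default_rng(seed).normal(size=LATENT_DIM)
    img = np.tanh(weights @ z)  # squash into the [-1, 1] pixel range
    return img.reshape(IMG_SIDE, IMG_SIDE)

a, b = generate_face(1), generate_face(2)
print(a.shape, np.allclose(a, b))  # different seeds give different "faces"
```

The same seed always reproduces the same image, which is why researchers can share specific latent vectors; a fresh seed per request is what gives the site its endless stream.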
  5. Google and Microsoft warn investors that bad AI could harm their brand As AI becomes more common, companies' exposure to algorithmic blowback increases Illustration by Alex Castro / The Verge For companies like Google and Microsoft, artificial intelligence is a huge part of their future, offering ways to enhance existing products and create whole new revenue streams. But, as revealed by recent financial filings, both firms also acknowledge that AI — particularly biased AI that makes bad decisions — could potentially harm their brands and businesses. These disclosures, spotted by Wired, were made in the companies' 10-K forms. These are standardized documents that firms are legally required to file every year, giving investors a broad overview of their business and recent finances. In the segment titled "risk factors," both Microsoft and Alphabet, Google's parent company, brought up AI for the first time. From Alphabet's 10-K, filed last week: These disclosures are not, on the whole, hugely surprising. The idea of the "risk factors" segment is to keep investors informed, but also mitigate future lawsuits that might accuse management of hiding potential problems. Because of this they tend to be extremely broad in their remit, covering even the most obvious ways a business could go wrong. This might include problems like "someone made a better product than us and now we don't have any customers," and "we spent all our money so now we don't have any." But, as Wired's Tom Simonite points out, it is a little odd that these companies are only noting AI as a potential factor now. After all, both have been developing AI products for years, from Google's self-driving car initiative, which began in 2009, to Microsoft's long dalliance with conversational platforms like Cortana. This technology provides ample opportunities for brand damage, and, in some cases, already has.
Remember when Microsoft's Tay chatbot went live on Twitter and started spouting racist nonsense in less than a day? Years later, it's still regularly cited as an example of AI gone wrong. However, you could also argue that public awareness of artificial intelligence and its potential adverse effects has grown hugely over the past year. Scandals like Google's secret work with the Pentagon under Project Maven, Amazon's biased facial recognition software, and Facebook's algorithmic incompetence with the Cambridge Analytica scandal have all brought the problems of badly-implemented AI into the spotlight. (Interestingly, despite similar exposure, neither Amazon nor Facebook mention AI risk in their latest 10-Ks.) And Microsoft and Google are doing more than many companies to keep abreast of this danger. Microsoft, for example, is arguing that facial recognition software needs to be regulated to guard against potential harms, while Google has started the slow business of engaging with policy makers and academics about AI governance. Giving investors a heads-up too only seems fair. Source
  6. Technology: the latest trends affecting travel Travel Technology Europe returned to Olympia London for two days of innovation, networking and news last month. Andrew Doherty reports on the latest tech trends captivating travel. Artificial intelligence, New Distribution Capability (NDC) and digital transformation were among the hot topics discussed at Travel Technology Europe (TTE), which celebrated its 16th anniversary in February. More than 100 technology brands and start-ups attended the exhibition to showcase and sell their wares to prospective buyers, with 120 industry professionals invited to speak at a series of talks and seminars that ran in tandem with the main show. However, it was at the C-suite Question Time panel that leading marketers, chief information and technology officers convened to talk about how technology was influencing their businesses and travel more generally. Panellists comprised Emil Majkowski, chief information officer and architect at Rentals United; Simon Hamblin, chief technology officer at dnata Travel; Clare de Bono, head of product and innovation at Amadeus; Phil Scully, chief information officer at Costa Coffee; and Suzie Thompson, vice-president of marketing, distribution and revenue management at Red Carnation Hotels. Moderating the discussion was Charlotte Lamp Davies, principal consultant at A Bright Approach, who kicked off proceedings by asking speakers how they were embracing technology in 2019. De Bono said Amadeus would continue to work on NDC – an XML-based communication standard between airlines and travel agents, which offers access to a broader airline inventory from participating carriers, including ancillary products. “For our agent partners, this means bringing NDC, low-cost carriers and new application programming interfaces (APIs) to a centralised platform.” From a customer-facing position, Thompson explained Red Carnation Hotels would be future-proofing its marketing strategies. 
“We want to have a data cleanup and use artificial intelligence to obtain healthier insights to use in our marketing efforts,” she said. Meanwhile, at dnata Travel, Hamblin said scalability would take precedence in its tech strategy. He said: “Apart from bringing all of our legacy systems together, we want to work on our Yalago bedbank brand. The engine is currently handling more than 10 million requests an hour. We want it to cope with 100 million.” Driving innovation The session also focused on practical steps travel companies can take to attract tech talent. De Bono said businesses must empower IT teams by offering access to the latest technologies on offer. “We [Amadeus] have moved off mainframes; 600 million lines of code were rewritten in 2017. “We’re also leveraging Amazon Web Services and Google Cloud Platform, which is really helping us recruit. Not only do teams want to work on things that make a difference, but they also want to work with new tech.” Majkowski said companies that have a dedicated tech strategy would help retain talent. “When teams feel like their ideas can be implemented, they will stay. They don’t want to jump between projects.” In order to successfully drive digital transformation, Hamblin said travel companies must encourage innovation from the top down. “It’s so important that everyone understands what’s happening, why the digital transformation is taking place and how it will impact them. Leaving legacy systems behind is difficult – the only way to get through it is by breaking them down one by one and demonstrating how the new programmes can deliver.” Red Carnation’s Thompson explained companies must first assess the value of digital transformation before assigning a budget to facilitate it. “If digital transformation – investing in artificial intelligence or virtual reality, for example – doesn’t benefit the customer, then we as a business won’t do it,” she said. 
“It must enhance their experience.” De Bono said Amadeus was developing AI technology to help travel agents suggest relevant holiday options. Alita (Amadeus Linguistics Intelligent Travel Assistant) observes customer behaviour to predict, recommend and personalise trips using natural language processing and machine learning. Because Alita is still a prototype, there is currently no time-frame as to when it will be rolled out. Looking to the future, Majkowski predicted automated payment systems would become more common in travel businesses. “I think we will see more revenue management programmes using machine learning to facilitate data distribution, while Blockchain will help with managing identity.” Scully added: “One-tap payments will go a long way to facilitating a frictionless customer experience too. Unless that happens, we will continue to queue and still get frustrated.” Source
  7. SAS to invest $1 billion in AI for industry uses, education, R&D The AI investment is part of SAS' efforts to make data, AI, machine learning and algorithms more return driven and consumable. SAS said it will invest $1 billion in artificial intelligence over the next three years as it develops its analytics platform, educates data scientists and targets industry-specific use cases. The investment is part of SAS' effort to build a higher profile. SAS is an analytics and data science pioneer, but the privately-held company has been quietly retooling its business and products. SAS' artificial intelligence investment will focus on AI research and development, education initiatives such as its certifications and data science academy and services to create better returns on projects. Oliver Schabenberger, chief technology and operating officer at SAS, will give a keynote at the Nvidia GPU Technology Conference on Tuesday. SAS' AI efforts will revolve around embedding AI into its platform and creating tools for data management, customer analytics, fraud and security and risk management. The company will also aim to meld AI and the Internet of things data for industries ranging from financial services to manufacturing to healthcare. We caught up with Schabenberger to talk about SAS's strategy in December. Here were the key points on the company's strategy and where AI fits in: SAS in recent years "hasn't been as visible as it could have been," said Schabenberger. But the company has been making pivots to software as a service, connecting its platform to other analytics tools and targeting industries better. SAS has been focused on "how our offering can bring analytics to areas undiscovered," he added. SAS has also been focused on targeting a wide range of companies beyond large enterprises and making its offering more consumable.
The company is entering a results as a service model where customers come with business problems and SAS can help solve them with its expertise in analytics, machine learning and data science. Most of the company's customers are on-premises, but migrating to cloud workloads at different paces, said Schabenberger. As that migration continues, SAS is focused on bridging the gap between those consuming the data and those preparing it and programming. "Data needs to be consumed," said Schabenberger. "Our offering today is more than what SAS became known for." SAS is cloud agnostic and will plug into Google Cloud Platform, AWS and Microsoft Azure. The value from SAS comes from expertise in developing machine learning models and algorithms. "Our strength is embedding algorithms and bringing them into action to process data into a user model and solution," said Schabenberger. Source
  8. The future of graphics is unquestionably AI image generation: Jensen Huang Nvidia CEO tells ZDNet that a new take on Quake II points the way to the future. For old-school gamers who fondly remember rocket jumps, there was a point during the GTC keynote address by Nvidia CEO Jensen Huang that would warm the cockles of the heart: An updated version of Quake II to make use of the ray tracing capabilities of the company's RTX GPUs. The key to this demonstration was denoising, Huang told ZDNet on Tuesday. "Denoising is basically using AI to infer the missing pixels," he said. "If I disable the denoising for you yesterday, what it would look like is a black, dark spotty, kind of fuzzy image, but when we turn on denoising, it looks like what you saw yesterday. Incredible." On Monday, members of the research arm of Nvidia showed off how they can use a neural network to create a high-quality image from a rough sketch, and besides a few boundary issues with different elements, it looked rather realistic. "It's like a colouring book picture that describes where a tree is, where the sun is, where the sky is," Nvidia vice president of applied deep learning research Bryan Catanzaro said at the time. "And then the neural network is able to fill in all of the detail and texture, and the reflections, shadows, and colours based on what it has learned about real images." The challenge with this technology, according to Huang, will be how to speed it up to real-time. "It's very hard -- and the reason for that is because you don't have very much time," he told ZDNet. "You only have about roughly two or three milliseconds to do it -- not one or two seconds." Responsibility for this increase falls to the Tensor cores used in Nvidia's Turing architecture. "The Tensor core processor is so fast, and our goal is to now use that Tensor core, to teach it, how to guess, otherwise infer, the missing pixels," Huang said. "We are doing a lot of work in AI-inferred image-generation. 
It is unquestionably the future." Quake II in its original colour and lighting scheme (Image: Nvidia) Earlier on Tuesday, Google announced its Stadia cloud gaming service, a space Nvidia has been playing in for some time with its GeForce-as-a-service offer, GeForce Now. The product is currently in a closed beta with 300,000 users, 15 datacentres, and 1 million further people on a waiting list. Although he would not be drawn on direct comparisons between the services, Huang told journalists that there are basic laws of physics to deal with. "The latency, the speed of light matters in the round-trip time of video games," he said. "The round trip time of cloud gaming -- we're the best, GeForce Now is the best in the world -- round trip time is 70ms. For esports, you would have been shot 10 times, so it's not perfect for esports. Now there are many games, and for some people their framerate is only 30fps... so for some games, it's fine." The Nvidia CEO said GeForce Now is actively avoiding trying to become a Netflix of gaming, due to large gaming franchises and publishers straddling the industry. "We believe that the video game industry is largely occupied, the best platforms are largely occupied by, call it, five or 10 games -- and these five to 10 games are powerful franchises, they're not going to put it on your store," he said. "And so our strategy is to leave the economics completely to the publishers, to not get in the way of their relationship with the gamer. Because Blizzard, World of Warcraft, has their gamers, and PUBG has their gamers, League of Legends has their gamers, and the publishers are very, very strong, and they would like to have a great relationship with their gamers, and we would like not to get in the way of that."
Rather, GeForce Now is a way for Nvidia to move beyond its current 200 million user base, and chase the "billion PC customers, who don't have GeForces" because the economics do not stack up, or their current hardware is limited in some way from high-powered gaming. With latency being key, Nvidia is looking to get its servers across the globe. In particular, to partner with telcos under the GeForce Now Alliance banner, like SoftBank and LG Uplus, which on Monday announced that they would deploy ray-tracing-capable RTX Servers in Japan and Korea. Huang added that GeForce Now is expected to be upgraded to RTX hardware in either the third or fourth quarter of the year. Source
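The denoising Huang describes, using AI to infer missing pixels from their surroundings, can be illustrated with a deliberately simple, non-learned stand-in: fill each missing pixel from the average of its neighbours. This NumPy sketch is purely illustrative and is not Nvidia's actual denoiser; the function name and the toy image are invented for the example.

```python
import numpy as np

def fill_missing_pixels(img, mask, iterations=50):
    """Iteratively replace masked (missing) pixels with the mean of their
    4-connected neighbours -- a crude stand-in for what a trained denoiser
    infers from surrounding context."""
    out = img.astype(float).copy()
    out[mask] = 0.0
    for _ in range(iterations):
        # up/down/left/right neighbour average (edges reuse their own value)
        padded = np.pad(out, 1, mode="edge")
        neighbors = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                     padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out[mask] = neighbors[mask]  # only missing pixels are updated
    return out

# Toy example: a smooth gradient image with two pixels knocked out
truth = np.linspace(0, 1, 16).reshape(4, 4)
mask = np.zeros_like(truth, dtype=bool)
mask[1, 1] = mask[2, 2] = True
restored = fill_missing_pixels(truth, mask)
print(np.abs(restored - truth)[mask].max())  # near zero for smooth images
```

A real denoiser learns far richer priors than neighbour averaging, which is exactly why a neural network can reconstruct a "black, dark spotty, kind of fuzzy image" into something watchable.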
  9. Artificial intelligence and the future of smartphone photography 3-D sensors coming to this year's smartphones are just the tip of a wave of machine learning-driven photography that will both correct shortcomings of smartphone pictures and also provide some stunning new aspects of photography. Photography has been transformed in the age of the smartphone. Not only is the pose different, as in the case of the selfie, but the entire nature of the process of light being captured by phone cameras is something else altogether. Cameras are no longer just a lens and a sensor; they are also the collection of algorithms that instantly manipulate images to achieve photographic results that would otherwise require hours of manipulation via desktop software. Photography has become computational photography. Continued advances in machine learning forms of artificial intelligence will bring still more capabilities that will make today's smartphone pictures look passé. Recent examples of the state of the art on phones are Google's Pixel 3 smartphone pictures and Apple's iPhone X photos. In the former case, Google has used machine learning to capture more detail under low-light conditions, so that night scenes look like daylight. These are simply not shots that ever existed in nature. They are super-resolution pictures. And Apple, starting with iPhone X in 2017, added "bokeh," the artful blurring of elements outside of the focal point. This was not achieved via aspects of the lens itself, as is the case in traditional photography, but rather by a computational adjustment of the pixels after the image is captured. It's quite possible 2019 and 2020's breakthrough development will be manipulating the perspective of an image to improve it. Hopefully, that will lead to a correction of the distortions inherent in smartphone photography that make them come up short next to digital single-lens-reflex (DSLR) camera pictures. 
How a convolutional neural network, or CNN, attempts to reconstruct reality from a picture. From "Understanding the Limitations of CNN-based Absolute Camera Pose Regression," by Torsten Sattler of Chalmers University of Technology, Qunjie Zhou and Laura Leal-Taixe of TU Munich, and Marc Pollefeys of ETH Zürich and Microsoft. Sattler et al. They could, in fact, achieve results akin to what are known as "tilt-shift" cameras. In a tilt-shift camera, the lens is angled to make up for the angle at which a person is standing with the camera, and thereby correct the distortions that would be created in the image because of the angle between the individual and the scene. Tilt-shift capabilities can be had by DSLR owners in a variety of removable lenses from various vendors. The average phone camera has a lens barrel so tiny that everything it captures is distorted. Nothing is ever quite the right shape as it is in the real world. Most people may not notice or care, as they've become used to selfies on Instagram. But it would be nice if these aberrations could be ameliorated. And if they can, it would be a selling point for the next round of smartphones from Google, Apple, etc. Increasingly, the iPhone and other cameras will carry rear cameras with 3-D sensors. These sensors, made by the likes of Lumentum Holdings and other chip vendors, measure the depth of the surroundings of the phone by sending out beams of light and timing how long they take to return to the phone after bouncing off objects. Techniques such as "time-of-flight" allow the phone to measure in detail the three-dimensional structure of the surrounding environment. Those sensors can take advantage of a vast body of statistical work that has been done in recent years to understand the relationship between 2-D images and the real world. Google's "Night Sight" feature on its Pixel 3 smartphones: scenes that never existed in nature. Google. 
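The time-of-flight principle mentioned above reduces to simple arithmetic: the sensor times a light pulse's round trip, and the distance to the object is half the round trip multiplied by the speed of light. A minimal sketch (the function name is ours, not a real sensor API):

```python
# Time-of-flight depth: the sensor emits a light pulse and times its round
# trip; distance is half the round-trip time at the speed of light.
C = 299_792_458.0  # speed of light in a vacuum, m/s

def tof_distance_m(round_trip_seconds):
    """Distance to the reflecting object, in metres."""
    return C * round_trip_seconds / 2.0

# A pulse returning after ~6.67 nanoseconds bounced off something ~1 m away
print(tof_distance_m(6.67e-9))  # ≈ 1.0 m
```

The nanosecond scale of those round trips is why time-of-flight sensing requires dedicated hardware rather than a general-purpose CPU timer.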
A whole lot of work has been done with statistics to achieve the kinds of physics that go into tilt-shift lenses, both with and without special camera gear. For example, a technique called "RANSAC," or "random sample consensus," goes back to 1981 and is specifically designed to find landmarks in the 3-D world that can be mapped to points in a 2-D image plane, to know how the 2-D image correlates to three-dimensional reality. Using that technique, it's possible to gain a greater understanding about how a two-dimensional representation corresponds to the real-world. A team of researchers at the University of Florence in 2015 built on RANSAC to infer the setup of a pan-tilt-zoom camera by reasoning backward from pictures it took. They were able to tune the actuators, the motors that control the camera, to a fine degree by using software to analyze how much distortion is introduced into pictures with different placements of the camera. And they were able to do it for video, not just still images. From that time, there's been a steady stream of work to estimate objects in pictures, referred to as pose estimation, and a related task, simultaneous localization and mapping, or SLAM, which constructs in software a "cloud" of points in a 3-D scene that can be used to understand how much distortion is in a digital photo. Researchers at the University of Erlangen-Nürnberg in Germany and the Woods Hole Oceanographic Institution in 2017 showed off a Python library, called CameraTransform, which lets one reckon the real dimensions of an object in the world by working backward from the image taken. Seeing around corners: a neural network created by researchers to infer objects occluded in a picture, consisting of an encoder-decoder combined with a generative adversarial network. Courtesy of Helisa Dhamo, Keisuke Tateno, Iro Laina, Nassir Navab, and Federico Tombari of the Technical University of Munich, with support from Canon, Inc. Dhamo et al. 
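RANSAC's core loop, fitting a model to a minimal random sample and keeping the hypothesis with the largest consensus set, is easy to sketch in two dimensions. This toy version fits a line in the presence of outliers; real camera-pose pipelines apply the same loop to 3-D-landmark-to-2-D-image correspondences. The data and tolerances here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_line(points, iters=200, tol=0.1):
    """Minimal RANSAC: repeatedly fit y = m*x + b through two random points
    and keep the fit the most points agree with (the consensus set)."""
    best_inliers, best_model = 0, None
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue  # degenerate sample, skip
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        residuals = np.abs(points[:, 1] - (m * points[:, 0] + b))
        inliers = int((residuals < tol).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (m, b)
    return best_model, best_inliers

# 80 points near y = 2x + 1, plus 20 wild outliers
x = rng.uniform(0, 10, 80)
inline = np.column_stack([x, 2 * x + 1 + rng.normal(0, 0.03, 80)])
outliers = rng.uniform(0, 10, (20, 2))
(m, b), n = ransac_line(np.vstack([inline, outliers]))
print(m, b)  # slope ≈ 2, intercept ≈ 1, despite the outliers
```

A least-squares fit over all 100 points would be dragged off by the outliers; RANSAC's consensus voting is what makes it robust, which is exactly the property needed when matching noisy image landmarks to 3-D geometry.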
Last year, researchers at the Technical University of Munich, Germany and Canon, Inc. showed it's possible to take a single image and infer what's in the scene that's occluded by another object. Called a "layered depth image," it can create new scenes by removing an object from a photo, revealing the background that the camera never saw, but that was computed from the image. The approach uses the familiar encoder-decoder approach found in many neural network applications, to estimate the depth of a scene, and a "generative adversarial network," or GAN, to construct the parts of the scene that were never actually in view when the picture was taken. All that research is bubbling up and is going to culminate in some fantastic abilities for the next crop of smartphone cameras, equipped with 3-D sensors. The results of this line of research should be stunning. At the very least, one can imagine portraits taken on smartphones that no longer have strange distortions of people's faces. Super-resolution pictures of architecture will be possible that create parallel lines by evening out all the distortions in the lens. The smartphone industry will be able to claim another victory over the DSLR market as phones churn out pictures with stunning levels of accuracy and realism. But, of course, the long-term trend for smartphone photography is away from realism, toward more striking effects that were not possible before computational photography. And so we may see uses of 3-D sensing that tend toward the surreal. For example, tilt-shift cameras can be used to create some strangely beautiful effects, such as narrowing the depth of field of the shot to an extreme degree. That has the effect of making landscapes look as if they're toy models, in an oddly satisfying way. There are apps for phones that will do something similar, but the effect of having 3-D sensors coupled to AI techniques will go well beyond what those apps achieve. 
There are techniques for achieving tilt-shift in Photoshop, but it will be much more satisfying to have the same effects come right out of the camera with each press of the shutter button. Down the road, there'll be another stage that will mean a lot in terms of advancing machine learning techniques. It's possible to forego the use of 3-D sensors and just use a convolutional neural network, or CNN, to infer the coordinates in space of objects. That would save on the expense of building the sensors into phones. However, currently, such software-only approaches produce poor results, as discussed in a report out this week by researchers at Microsoft and academic collaborators. Known as "absolute pose regression," the software-only approach failed to generalize, they write, after training, meaning that whatever techniques the CNN acquired didn't correctly estimate geometry when tested with novel images. The authors consider their work "an important sanity check" for software-only efforts, and they conclude that "there is still a significant amount of research to be done before pose regression approaches become practically relevant." How will that work get done? Not by researchers alone. It will be done by lots of smartphone owners. With the newest models, containing 3-D sensors, they will snap away their impressive 3-D sensing-enhanced pictures. While they do so, their device, or the cloud, will be keeping track of how real-world geometry correlates to 2-D images. It will be using all that activity, in other words, to keep learning. Some day, with enough 3-D shots, the CNN, or whatever algorithm is used, will be smart enough to look at the world and know exactly what it's like even without help from 3-D depth perception. Source
  10. Artificial Intelligence in Cyber Security – Cyber Attacks and Defence Approach Artificial Intelligence in Cyber Security is always evolving, from the old-school days when AV was thought to be an effective defender to the present day, where AV, SIEM, IPS, and endpoint security and protection all play a crucial part in building up defenses. But as technologies develop, the existing cybersecurity controls won't be sufficient to defend against unforeseen and limitless cyber threats. Cyber professionals need an upper hand in cybersecurity, and this can be offered by artificial intelligence (AI), what we here call Artificial Intelligence in Cyber Security. The implementation of Artificial Intelligence in Cyber Security has just begun, and we have a long way to go to understand and exploit its full potential. In this article, we will discuss AI, its types, examples, and applications in detail. What is Artificial Intelligence? A simulation of human intelligence by computers is called Artificial Intelligence. The simulation includes learning, understanding, logical reasoning, and improvisation. AI can be either strong or weak, based on the purpose for which it was created. AI created to perform only a specific set of tasks is called Artificial Narrow Intelligence. AI that is capable of self-correcting and making decisions exactly like a human is called Artificial General Intelligence (which is still a myth). Real-world examples of artificial narrow intelligence include Siri, OK Google, Alexa, IBM Watson, and Microsoft Cognitive Services. Different types of Artificial Intelligence in Cyber Security Reactive and limited-purpose devices IBM's Deep Blue is a chess program that was created to defeat its opponent by evaluating piece and move combinations, using permutations and combinations to deliver the best move for a given position. The same goes for Google's AlphaGo. 
Theory of Mind This type of AI observes user behavior based on culture, beliefs, desires, and more. This type of AI has not yet been developed. Memory limitations This AI can remember what occurred in the past only for a limited time. Self-driving cars are based on this type of AI. Self-aware AI This AI will be able to think exactly like humans, analysing situations and deciding things just as people do. This type of AI is yet to be developed. Examples of AI Common examples of AI include automation, machine learning, robotics, self-driving cars, and virtual AI assistants. Robotics has incorporated AI to deliver goods or services as promised, without the blunders humans make when the load is high and large output is in demand. Automation is similar to robotics, but with much less influence from AI. Self-driving cars with AI can ease things on the road, with better passenger safety, but only if the AI is tested for accuracy and quality many times over. Machine learning, along with deep learning, brings in fully functional and operational artificial narrow/general intelligence. Deep learning is more like the neurons of the human body, bringing actual reflexes and emotions to the AI. Apart from the above, virtual AI assistants are mostly used by businesses to bring an extra layer of security and operational excellence. AI Across Verticals The influence of AI is now spreading across verticals, starting from healthcare, manufacturing, education, finance, law, astronomy, and more. In healthcare, AI assistance is preferred for fetching data on patients, their medical history, and so on. The ability to convert natural language into a programmable language helps healthcare take care of outpatients and their daily routines. As already mentioned for manufacturing, AI helps with automation and other robotic operations. 
In the case of education, AI facilitates grading systems, and also brings education to the doorstep through AI-powered education platforms accessible via smartphones, tablets, or laptops. Similarly, it helps with law by fetching the details of the taxonomy and ontology related to it. Will AI be a boon or bane for mankind? Bane AI could cause a lot of trouble for mankind: if fully realized, Artificial General Intelligence could overtake humans and spin out of control, taking non-rational decisions of its own. AI can also reduce jobs for humans, given its speed, quality, and reliability, leading to unemployment. Boon Artificial Intelligence in Cyber Security will make things simple and easy for humans, requiring far less effort than in former times. Artificial Intelligence in Cyber Security will improve over time and will deliver better outputs through continuous learning procedures. Artificial Intelligence in Cyber Security will come to understand malware and cyberattacks over time, developing improved intelligence and increased defenses against cyber attacks. Narrow AI, General AI and Super AI Artificial Narrow Intelligence falls a little short of humans, with expertise in only a narrow field of operation, for example: AlphaGo, AlphaGo Zero, Siri, Deep Blue, etc. Artificial General Intelligence will be the equivalent of human intelligence, with simple and easy ways of completing a task by analyzing it and making decisions upon it. Artificial Super Intelligence will surpass human intelligence, with increased reliability, credibility, knowledge, problem-solving skills, and wisdom. This is the point at which some researchers believe mankind could go extinct. How will AI improve cybersecurity? With AI in place, researchers will be able to identify and solve problems a little earlier, and bring more proactive controls based on the nature of the threat. 
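A concrete, if deliberately tiny, form of such proactive detection is statistical anomaly flagging: learn what "normal" looks like from a baseline window, then alert on values that deviate by several standard deviations. The numbers below are invented, and this sketch is only a stand-in for the far richer behavioral models real security products use.

```python
import statistics

def flag_anomalies(baseline, new_counts, threshold=3.0):
    """Flag indices in new_counts whose value sits more than `threshold`
    standard deviations above the baseline mean."""
    mean = statistics.fmean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0  # avoid division by zero
    return [i for i, c in enumerate(new_counts)
            if (c - mean) / stdev > threshold]

baseline = [4, 6, 5, 7, 5, 6, 4, 5]   # normal hourly failed-login counts
new = [5, 6, 90, 4]                   # hour 2 looks like a brute-force attempt
print(flag_anomalies(baseline, new))  # [2]
```

The same mean-and-deviation idea, applied per user, per host, and per protocol, is what lets a system surface unusual network behavior before an analyst would spot it by hand.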
AI can modify security protocols based on the development of malware and the information available about it, so it would be able to detect or prevent foreseen threats and their variations on its own. Artificial Intelligence in Cyber Security can also be used to alert security professionals to any anomalous behavior in the network, giving cybersecurity pros the upper hand to act in time and avoid becoming victims of ransomware or remote code execution. AI can also be used to define privileges for users automatically, based on their role inside the organization, thus avoiding privilege escalation flaws. It can also automate updates, decline poor patches, deploy software, and monitor system health based on application and OS updates. Cognitive computing along with Artificial Intelligence in Cyber Security can bring in enhanced security protocols and services against hacking methodologies. Artificial Intelligence can also work in favor of hackers trying to break into organizations: they can communicate with command-and-control servers more covertly and break into networks and computers in far less time. Any technological development comes with its own set of cons, but it's the nature of humans to evolve, and artificial intelligence will play a crucial part overall; cybersecurity will benefit from it as well. Source
  11. Can AI help crack the code of fusion power? ‘It’s sort of this beautiful synergy between the human and the machine’ Part of The Real-World AI Issue With the click of a mouse and a loud bang, I blasted jets of super-hot, ionized gas called plasma into one another at hundreds of miles per second. I was sitting in the control room of a fusion energy startup called TAE Technologies, and I’d just fired its $150 million plasma collider. That shot was a tiny part of the company’s long pursuit of a notoriously elusive power source. I was at the company’s headquarters to talk to them about the latest phase of their hunt, which involves an algorithm called the Optometrist. Nuclear fusion is the reaction that’s behind the Sun’s energetic glow. Here on Earth, the quixotic, expensive quest for controlled fusion reactions gets a lot of hype and a lot of hate. (To be clear, this isn’t the same process that happens in a hydrogen bomb. That’s an uncontrolled fusion reaction.) The dream is that fusion power would mean plenty of energy with no carbon emissions or risk of a nuclear meltdown. But scientists have been pursuing fusion power for decades, and they are nowhere near adding it to the grid. Last year, a panel of advisers to the US Department of Energy published a list of game-changers that could “dramatically increase the rate of progress towards a fusion power plant.” The list included advanced algorithms, like artificial intelligence and machine learning. It’s a strategy that TAE Technologies is banking on: the 20-year-old startup began collaborating with Google a few years ago to develop machine learning tools that it hopes will finally bring fusion within reach. Norman, TAE’s $150 million plasma collider. Photo by Brennan King and Weston Reel Attempts at fusion involve smacking lightweight particles into one another at temperatures high enough that they fuse together, producing a new element and releasing energy. 
Some experiments control a super-hot ionized gas called plasma with magnetic fields inside a massive metal doughnut called a tokamak. Lawrence Livermore National Laboratory fires the world’s largest laser at a tiny gold container with an even tinier pellet of nuclear fuel inside. TAE twirls plasma inside a linear machine named Norman, tweaking thousands of variables with each shot. It’s impossible for a person to keep all of those variables in their head or to change them one at a time. That’s why TAE is collaborating with Google, using a system called the Optometrist algorithm that helps the team home in on the ideal conditions for fusion. We weren’t sure what to make of all the hype surrounding AI or machine learning or even fusion energy for that matter. So the Verge Science video team headed to TAE’s headquarters in Foothill Ranch, California, to see how far along it is, and where — if anywhere — AI entered the picture. You can watch what we found in the video above. Ultimately, we found a lot of challenges but a lot of persistent optimism, too. “The end goal is to have power plants that are burning clean fuels that are abundant, [and] last for as long as humanity could last,” says Erik Trask, a lead scientist at TAE. “Now, we think that we have found a way to do it, but we have to show it. That’s the hard part.” Source
  12. Could This Technology Make Amazon Go Stores Obsolete? A start-up company is promising to offer similar artificial intelligence tools that are scalable for larger retailers. Amazon (AMZN) is revolutionizing how we shop with its cashierless Amazon Go convenience stores, and hopes to expand the concept from fewer than 10 now into a network of 3,000 stores in only two years. There may be logistical issues in building out so many stores in such a short time, as well as political ones if states decide all stores must accept cash. But the biggest challenge may be competing technology that allows more retailers to offer their own no-cashier, no-checkout shopping. It could make the Amazon Go chain obsolete. IMAGE SOURCE: AMAZON.COM. The dawn of smart carts Amazon Go's "just walk out" shopping experience requires a store to be outfitted with machine vision, deep-learning algorithms, and an array of cameras and sensors to watch a customer's every move. It knows what every item is, and when it's been picked up and put back, so it can charge a shopper's account. A start-up called Caper is offering similar technology that is more accessible to more retailers. Rather than outfit an entire store with such advanced artificial intelligence (AI), Caper puts it into individual shopping carts. It lets supermarkets more easily compete against Amazon without the massive cost necessary to build new stores or retrofit a chain's existing infrastructure. Shoppers put items into AI-powered carts, which identify the products and ring up the total. Interactive screens on the carts not only keep a running tally of the order, but can also direct shoppers to in-store specials. The technology used by Caper's partners does currently require customers to scan each item into the shopping cart screens, but they're using it to train the deep learning algorithm to enable shopping without scanning. When they are finished shopping, customers can pay via the screen and leave. 
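The running-tally behaviour described above is, at its core, simple bookkeeping; the hard, AI-powered part is recognising the item without a scan. Here is a toy cart model with a hypothetical price table (the barcodes, names, and prices are all invented for illustration):

```python
class SmartCart:
    """Toy model of a scan-and-go cart: each scanned barcode is looked up
    in a (hypothetical) price table and added to a running tally."""
    PRICES = {
        "0001": ("Milk 1L", 1.49),
        "0002": ("Bread", 2.20),
        "0003": ("Eggs 12pk", 3.10),
    }

    def __init__(self):
        self.items = []

    def scan(self, barcode):
        name, price = self.PRICES[barcode]
        self.items.append((name, price))
        return name, price

    def remove(self, barcode):
        # Shopper put the item back: drop one matching entry
        name, price = self.PRICES[barcode]
        self.items.remove((name, price))

    def total(self):
        return round(sum(price for _, price in self.items), 2)

cart = SmartCart()
cart.scan("0001")
cart.scan("0002")
cart.scan("0003")
cart.remove("0002")   # changed my mind about the bread
print(cart.total())   # 4.59
```

Caper's harder problem is making `scan` implicit: its computer-vision model must call the equivalent of these add and remove operations automatically as items go in and out of the cart.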
While bagging up a week's worth of groceries could slow you down, this system seems likely to encourage shoppers to bring their own bags into the store and fill them up as they go. That's often how the scan-and-go technologies work so you're not stuck at the register still having to bag all your items. Amazon Go's limitations The original Amazon Go location was only 1,800 square feet, about the size of a convenience store, but it was estimated to cost at least $1 million to install all the cameras and sensors. Analysts estimate that to build out 3,000 stores would cost some $3 billion. While Amazon is testing a range of store sizes, some as large as 2,400 square feet, it's clear the concept is prohibitively expensive to apply to something on the order of a full-sized supermarket and is why they're mostly stocked with convenience items that you can literally grab-and-go. A Walmart store can run anywhere from 30,000 square feet to over 200,000 for one of its supercenters. Even Amazon's Whole Foods Markets average 40,000 square feet. Retrofitting one of these locations would also seem to be nearly impossible since all-new shelving and displays would be needed to incorporate the cameras and sensors. Many grocery chains like Kroger have added so-called scan-and-go capabilities, where customers use a scanning gun or their smartphones to scan in each item they buy, and then upload the data to a register at checkout. But it's a clunkier system than Amazon's effortless grab-and-go technology; Walmart killed off its own scan-and-go system because it said shoppers weren't using it. Scaling up artificial intelligence Caper says its cart technology is currently in place at two grocery store chains, with plans to roll it out to 150 more this year. The company's website lists six retail partners from the New York area including C-Town, Key Food, and Pioneer supermarkets. 
It says its shopping carts have increased the value of the average basket by 18% because customers are exposed to products they might otherwise miss or can't find. Amazon Go is certainly poised to upend the convenience store industry with its advanced AI wizardry. But Caper could revolutionize the much broader and larger supermarket sector and make Amazon's stores obsolete by bringing the same sort of technology to your neighborhood grocery store in a package that's far more scalable. Source
  13. Habana, the AI chip innovator, promises top performance and efficiency Habana is the best kept secret in AI chips. Designed from the ground up for machine learning workloads, it promises superior performance combined with power efficiency to revolutionize everything from data centers in the cloud to autonomous cars. As data generation and accumulation accelerates, we've reached a tipping point where using machine learning just works. Using machine learning to train models that find patterns in data and make predictions based on those is applied to pretty much everything today. But data and models are just one part of the story. Another part, equally important, is compute. Machine learning consists of two phases: Training and inference. In the training phase, patterns are extracted, and machine learning models that capture them are created. In the inference phase, trained models are deployed and fed with new data in order to generate results. Both of these phases require compute power. Not just any compute, in fact: as it turns out, CPUs are not really geared towards the specialized type of computation required for machine learning workloads. GPUs are currently the weapon of choice when it comes to machine learning workloads, but that may be about to change. AI CHIPS JUST GOT MORE INTERESTING GPU vendor Nvidia has reinvented itself as an AI chip company, coming up with new processors geared specifically towards machine learning workloads and dominating this market. But the boom in machine learning workloads has whetted the appetite of other players as well. Cloud vendors such as Google and AWS are working on their own AI chips. Intel is working on getting FPGA chips in shape to support machine learning. And upstarts are having a go at entering this market as well. 
GraphCore is the most high-profile among them, with recent funding having catapulted it into unicorn territory, but it's not the only one: Enter Habana. Habana has been working on its own processor for AI since 2015. But as Eitan Medina, its CBO, told us in a recent discussion, it has been doing so in stealth until recently: "Our motto is AI performance, not stories. We have been working under cover until September 2018". David Dahan, Habana CEO, said that "among all AI semiconductor startups, Habana Labs is the first, and still the only one, which introduced a production-ready AI processor." As Medina explained, Habana was founded by CEO David Dahan and VP R&D Ran Halutz. Both Dahan and Halutz are semiconductor industry veterans, and they have worked together for years in semiconductor companies CEVA and PrimeSense. The management team also includes CTO Shlomo Raikin, former Intel project architect. Medina himself also has an engineering background: "Our team has a deep background in machine learning. If you Google topics such as quantization, you will find our names," Medina said. And there's no lack of funding or staff either. Habana just closed a $75 million Series B funding round, led by Intel Capital no less, which brings its total funding to $120 million. Habana has a headcount of 120 and is based in Tel Aviv, Israel, but also has offices and R&D in San Jose, US, Gdansk, Poland, and Beijing, China. This looks solid. All these people, funds, and know-how have been set in motion by identifying the opportunity. Much like GraphCore, Habana's Medina thinks that the AI chip race is far from over, and that GPUs may be dominating for the time being, but that's about to change. Habana brings two key innovations to the table: Specialized processors for training and inference, and power efficiency. 
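The training/inference split that drives Habana's two-chip strategy shows up even in the smallest possible model: training extracts parameters from data once, while inference merely applies those frozen parameters to new inputs, a much lighter and more repetitive workload. A minimal NumPy illustration (the data points are made up):

```python
import numpy as np

# Training phase: extract a pattern from data (here, a least-squares line fit)
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.1, 3.9, 6.0, 8.1])            # roughly y = 2x
X1 = np.hstack([X, np.ones_like(X)])          # add a bias column
weights, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Inference phase: the trained weights are applied to unseen data
def predict(x_new):
    return weights[0] * x_new + weights[1]

print(predict(5.0))  # ≈ 10
```

Training needs gradient (or here, matrix-factorization) machinery and sees the whole dataset; inference is a handful of multiply-adds per query. That asymmetry is why a chip optimized purely for the inference side, like Goya, can make different trade-offs than a training chip like Gaudi.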
SEPARATING TRAINING AND INFERENCE TO DELIVER SUPERIOR PERFORMANCE Medina noted that starting with a clean sheet to design their processor, one of the key decisions made early on was to address training and inference separately. As these workloads have different needs, Medina said that treating them separately has enabled them to optimize performance for each setting: "For years, GPU vendors have offered new versions of GPUs. Now Nvidia seems to have realized they need to differentiate. We got this from the start." Habana offers two different processors: Goya, addressing inference; and Gaudi, addressing training. Medina said that Goya is used in production today, while Gaudi will be released in Q2 2019. We wondered why inference was addressed first. Was it because the architecture and requirements for inference are simpler? Medina said that it was a strategic decision based on market signals. Medina noted that the lion's share of inference workloads in the cloud still runs on CPUs. Therefore, he explained, Habana's primary goal at this stage is to address these workloads as a drop-in replacement. Indeed, according to Medina, Habana's clients at this point are to a large extent data center owners and cloud providers, as well as autonomous car ventures. The value proposition in both cases is primarily performance. According to benchmarks published by Habana, Goya is significantly faster than both Intel's CPUs and Nvidia's GPUs. Habana used the well-known ResNet-50 benchmark, and Medina explained the rationale was that ResNet-50 is the easiest to measure and compare, as it has fewer variables. Medina said other architectures must make compromises: "Even when asked to give up latency, throughput is below where we are. With GPUs / CPUs, if you want better performance, you need to group data input in big groups of batches to feed the processor. 
Then you need to wait till entire group is finished to get the results. These architectures need this, otherwise throughput will not be good. But big batches are not usable. We have super high efficiency even with small batch sizes." There are some notable points about these benchmarks. The first, Medina pointed out, is that their scale is logarithmic, which is needed to be able to accommodate Goya and the competition in the same charts. Hence the claim that "Habana smokes inference incumbents." The second is that results become even more interesting if power efficiency is factored in. POWER EFFICIENCY AND THE SOFTWARE STACK Power efficiency is a metric used to measure how much power is needed per calculation in benchmarks. This is a very important parameter. It's not enough to deliver superior performance alone; the cost of delivering it is just as important. A standard metric to measure processor performance is IPS, Instructions Per Second. But IPS/W, or IPS per Watt, is probably a better one, as it takes into account the cost of delivering performance. Higher power efficiency is better in every possible way. Thinking about data centers and autonomous vehicles, minimizing the cost of electricity, and increasing autonomy are key requirements. And in the bigger picture, lowering the carbon footprint is a major concern for the planet. As Medina put it, "You should care about the environment, and you should care about your pocket." Goya's value proposition for data centers is focused on this, also factoring in latency requirements. As Medina said, for a scenario of processing 45K images/second, three Goya cards can get results with a latency of 1.3 msec, replacing 169 CPU servers with a latency of 70 msec plus 16 Nvidia Tesla V100 GPUs with a latency of 2.5 msec, at a total cost of around $400,000. The message is clear: You can do more with less. 
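The IPS/W metric discussed above is just throughput divided by power draw, but it can reorder rankings: a chip that is slower in absolute terms can still win on efficiency. The figures below are hypothetical, not measured numbers for any real product:

```python
def ips_per_watt(instructions_per_sec, watts):
    """Power efficiency: useful work delivered per watt consumed."""
    return instructions_per_sec / watts

# Hypothetical chips: A has higher raw throughput, but B wins on efficiency
a = ips_per_watt(1_000_000, 250)  # 4000.0 IPS/W
b = ips_per_watt(800_000, 100)    # 8000.0 IPS/W
print(b > a)  # True
```

At data-center scale, where electricity and cooling dominate operating cost, the per-watt figure, not the raw throughput, is what decides the bill.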
TPC, Habana's Tensor Processor Core at the heart of Goya, supports different form factors, memory configurations, and PCIe cards, as well as mixed-precision numerics. It is also programmable in C, and accessible via what Habana calls the GEMM engine (General Matrix Multiplication). This touches upon another key aspect of AI chips: the software stack, and integrations with existing machine learning frameworks. As there is a slew of machine learning frameworks people use to build their models, supporting as many of them as seamlessly as possible is a key requirement. Goya supports models trained on any processor via an API called SynapseAI. At this point, SynapseAI supports TensorFlow, MXNet and ONNX, an emerging exchange format for deep learning models, and is working on adding support for PyTorch and more. Users should be able to deploy their models on Goya without having to fiddle with SynapseAI. For those who wish to tweak their models to include customizations, however, the option to do so is there, as well as IDE tools to support them. Medina said this low-level programming has been requested by clients who have developed custom ways of maximizing performance on their current setting and would like to replicate this on Goya.

THE BIGGER PICTURE

So, who are these clients, and how does one actually become a client? Medina said Habana has a sort of screening process for clients, as they are not yet at the point where they can ship massive quantities of Goya. Habana is sampling Goya to selected companies only at this time. That's what's written on the form you'll have to fill in if you're interested.

Also: AI Startup Cornami reveals details of neural net chip

Not that Goya is a half-baked product, as it is used in production according to Medina. Specific names were not discussed, but yes, these include cloud vendors, so you can let your imagination run wild. Medina also emphasized that hardware-level R&D for Goya is mostly done. 
However, there is ongoing work to take things to the next level with 7 nanometer chips, plus work on the Gaudi processor for training, which promises linear scalability. In addition, development of the software stack never ceases, in order to optimize, add new features, and support more frameworks. Recently, Habana also published open source Linux drivers for Goya, which should help a lot considering Linux is what powers most data centers and embedded systems. Habana, just like GraphCore, seems to have the potential to bring about a major disruption in the AI chip market and the world at large. Many of its premises are similar: a new architecture, an experienced team, solid funding, and a drive to seize the opportunity. One obvious difference is in how they approach their public image: GraphCore has been quite open about its work, while Habana was a relative unknown up to now. As for the obvious questions -- which one is faster/better, which one will succeed, can they dethrone Nvidia -- we simply don't know. GraphCore has not published any benchmarks. From an organizational-maturity point of view, Habana seems to be lagging at this point, but that does not necessarily mean much. One thing we can say is that this space is booming, and we can expect AI chip innovation to catalyze AI even further soon. The takeaway from this, however, should be to make power efficiency a key aspect of the AI narrative going forward. Performance comes at a price, and this should be factored in. Source
14. SambaNova, a stealth startup, has received over $450 million to build a complete computing system of hardware and software. Co-founder Rodrigo Liang tells ZDNet the stakes are bigger than the craze for AI: it's about a multi-decade shift in computing. The rise in deep learning forms of AI, and the abundance of real-world data behind it, is pressing the need for a new kind of computer to replace the typical Von Neumann machines expressed in processors from Intel and AMD, and graphics chips from Nvidia. But that AI revolution in hardware, as big as it is, may be just the tip of the iceberg. So says Rodrigo Liang, chief executive of startup SambaNova Systems, which has gotten over $450 million in venture capital funding to build that new kind of computer. "We are at the cusp of a fairly large shift in the computer industry," Liang told ZDNet this week in an interview by phone. "It's been driven by AI, but at a macro level, over the next twenty to thirty years, the change is going to be bigger than AI and machine learning." The last thirty years in computing, said Liang, has been "focused on instructions and operations, in terms of what you optimize for." "The next five, to ten, to twenty years, large amounts of data and how it flows through a system is really what's going to drive performance." It's not just a novel computer chip, said Liang, rather, "we are focused on the complete system," he told ZDNet. "To really provide a fundamental shift in computing, you have to obviously provide a new piece of silicon at the core, but you have to build the entire system, to integrate across several layers of hardware and software." Liang's remarks about a complete computing system place SambaNova in the same camp as startups such as Cerebras Systems and Graphcore, which are selling appliances rather than just chips the way Mobileye and Habana Labs do. The details of SambaNova's efforts are under wraps, and Liang's comments come across as somewhat cryptic. 
But a glance at SambaNova's pedigree helps to illuminate things. Liang's co-founders include Stanford professor Kunle Olukotun, a pioneer in the design of multi-core processors, who founded a startup with Liang twenty years ago, called Afara Websystems, which was sold to Sun Microsystems. In a keynote talk during the 2018 NeurIPS conference on AI, Olukotun suggested next-generation computing systems needed to support emerging trends in computing workloads such as patterns and sparsity. Another co-founder is Stanford machine learning professor Christopher Ré, whose work includes how to develop neural networks trained with very little labeled data, known as "weak supervision." These scientists' research provides clues to what new systems perhaps should do. Olukotun's work has included "Spatial," a computing language that can take programs and de-compose them into operations that can be run in parallel, for the purpose of making chips that can be "reconfigurable," able to change their circuitry on the fly. Ré and others have developed industrialized versions of his weak supervision approach, called Snorkel DryBell, about which ZDNet wrote a year ago. In a keynote address in December of 2018 at the NeurIPS conference on AI, Olukotun tied the two areas together. Snorkel is part of the shift of computer programming from hard-coded to differentiable, in which code is learned on the fly, commonly referred to as "software 2.0." A programmable logic device, said Olukotun, similar to a field-programmable gate array (FPGA), could change its shape over and over to align its circuitry with that differentiated program, with the help of a smart compiler such as Spatial. In an interview in his office last spring, Olukotun laid out a sketch of how all that might come together. In what he refers to as a "data flow," the computing paradigm is turned inside-out. 
Rather than stuffing a program's instructions into a fixed set of logic gates permanently etched into the processor, the processor re-arranges its circuits, perhaps every clock cycle, to variably manipulate large amounts of data that "flows" through the chip. "You can map most of the operations in a TensorFlow graph to a set of primitives that look like map and reduce," he said, referring to the MapReduce programming model for distributed systems. "You say, hey, I want to take this graph of operations and map it in hardware, and then I'm going to flow the data through the graph, and then every cycle, I am going to get a new result." Today's chips execute instructions in an instruction "pipeline" that is fixed, he observed, "whereas in this reconfigurable data-flow architecture, it's not instructions that are flowing down the pipeline, it's data that's flowing down the pipeline, and the instructions are the configuration of the hardware that exists in place, kind of like an assembly line: Here come the cars, and then at every station, something happens." Something like that vision is exemplified in a 2017 research project by Olukotun called "Plasticine." Such an exotic approach would have to contend with a massive, deeply entrenched market for x86 CPUs, GPUs, and all the systems technology and tools built around them. CEO Liang is mindful of the obstacles to a new form of computing. After Afara was sold to Sun, Liang spent fourteen years running hardware first for Sun, and then for Oracle after Oracle acquired Sun. The high-performance computing empire that Sun built in the nineteen-nineties was ultimately marginalized by the expansion of standard x86-based processing. "We had a great run," he reflected. The standardization of x86, and then the GPU wave following it, have limited what people can do, he said. 
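Olukotun's assembly-line analogy can be sketched in a few lines of Python: data, rather than instructions, flows through a graph of map/reduce-style stages that sit "in place," and each batch pushed through yields a new result. The stages and inputs below are invented for illustration; they stand in for the hardware-configured primitives he describes, not for any real SambaNova design.

```python
# Toy "dataflow" pipeline: the operations are fixed in place (like
# stations on an assembly line) and the data flows through them.
# Stage names and the arithmetic are illustrative only.
stages = [
    ("map: scale",  lambda xs: [x * 2 for x in xs]),
    ("map: shift",  lambda xs: [x + 1 for x in xs]),
    ("reduce: sum", lambda xs: [sum(xs)]),
]

def run_dataflow(batch):
    """Flow one batch of data through every stage, in order."""
    for _name, op in stages:
        batch = op(batch)
    return batch

print(run_dataflow([1, 2, 3]))  # → [15]
```

In a reconfigurable chip, the compiler's job (what Spatial aims at) is to turn such a graph into physical circuitry, so a new result exits the pipeline every cycle instead of every function call.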
"If you look at the last ten to twenty years, the silicon portion has been commoditized, and then the software standardized, so there has not been a lot of degrees of freedom in between to bring innovation to the end user." To try and change that, Liang and partners have assembled quite a war chest. "For this type of technology, you need to be well-funded," Liang observed. "We're in a very strong funding position now," he said, after the latest round, a C round consisting of $250 million from a group of investors composed of accounts managed by private equity giant BlackRock, along with existing investors including Google Ventures, Intel Capital, Walden International, WRVI Capital and Redline Capital. A source close to SambaNova told ZDNet the company's post-money valuation is north of $2 billion, but Liang declined to comment on valuation and ZDNet was unable to confirm the amount. More than the money, Liang sees the confluence of AI, masses of data, the arrival of industry-standard frameworks such as TensorFlow and PyTorch, and the death of Moore's Law as a potent mix that cannot fail to open up doors for new approaches. "This is a change in the entire computing industry," said Liang, "it's not just about AI, it's not just a niche." "You need to have solutions end to end, from algorithms down to the silicon, and it's about making that available to everyone who's trying to come to grips with this transition." Source
  15. Google Translate is finally getting support for new languages. Although Google Translate already supports a wide range of different languages, there are still some languages that the service does not yet support. And believe it or not, it has been four years since Google Translate got support for a new language. That’s finally changing, though, with Google adding support for five new languages to Translate this week. Google announced on Wednesday that Google Translate is getting support for Kinyarwanda, Odia (Oriya), Tatar, Turkmen and Uyghur. Support for these new languages was made possible by the Google Translate Community and machine learning, according to Google. Google says the company has also enabled support for virtual keyboard input for Kinyarwanda, Tatar and Uyghur. These new languages are apparently spoken by more than 75 million people worldwide. Google Translate now supports a total of 108 languages, which is a pretty staggering number. Source
16. Over the next decade, U.S. banks, which are investing $150 billion in technology annually, will use automation to eliminate 200,000 jobs, thus facilitating "the greatest transfer from labor to capital" in the industry's history. The call is coming from inside the house this time, too—both the projection and the quote come from a recent Wells Fargo report, whose lead author, Mike Mayo, told the Financial Times that he expects the industry to shed 10 percent of all of its jobs. This, Mayo said, will lay the groundwork for, and I quote, "a golden age of banking efficiency." The job cuts are slated to hit front offices, call centers, and branches the hardest, where 20-30 percent of those roles will be on the chopping block. They will be replaced by better ATMs, automated chatbots, and software instruments that take advantage of big data and cloud computing to make investment decisions. "The next decade should be the biggest decade for banks in technology in history," Mayo said. It is not rare that a report forecasts the imminent erosion of an industry's jobs picture, but it is a little rare that a prominent industry analyst for one of said industry's largest companies is so brazen—even giddy—about trumpeting the imminent loss of those jobs. The think tanks and corporations that typically unveil findings like these are usually at least nominally worried about 'optics' or being diplomatic when they talk about such significant jobs elimination, but this is banking, I guess. The analysis itself is also secondary—filled with buzzwords and promises of harnessing big data and predictive algorithms that may or may not pan out to be as effective as currently thought. It is the confidence and enthusiasm for this schema that is key, as that is what will transform the report into a self-fulfilling prophecy. 
If the banks buy what Mayo and Wells Fargo are selling, then the report will contribute to an automation arms race, already underway, in which companies cut staff and purchase enterprise financial software products. This is how a lot of corporate automation unfolds. As a result, we can expect to interact with even more customer service chatbots and automated call menus (whether they work well or not), to see more financial decisions turned over to algorithms, and a continued flood of software products to enter the banking industry. And Wells Fargo certainly won't be the only bank automating here: As the FT notes, Citigroup is planning to eliminate tens of thousands of call center workers, and Deutsche Bank expects to slash half its ~100,000-strong workforce. Mayo has been making the cable TV rounds, touting this incoming golden age of high-tech, ultra-streamlined, automated banking, an age in which fleshy humanoid obstructions are finally smoothed out of the picture, making way for a purer, faster flow of capital from customer to banking executive. "This is fantastic," he said on CNBC's Squawk Box. "This should lead to record efficiency and market share gains by scale players, reflecting our theme, 'Goliath is Winning,'" Mayo wrote in his report, and he says the same on the newscasts. Insofar as a banking analyst can claim a catchphrase, "Goliath is Winning" is Mayo's, and he says it often—about big bank mergers, about the dominance of Morgan Stanley, and so on. The point is that banks that scale, win. Banks that grow, merge, and leverage their scale win more. And banks that have already scaled up to a nearly incomprehensible degree will win the *most*, now by automating their staff and cutting labor costs across the board—allowing, of course, more of the earnings to flow upstream and be concentrated among an ever-smaller pool of people. 
In the context of hundreds of thousands of people being automated out of their jobs by corporate executives, "Goliath is winning" may seem more like the kind of thing you would expect a RoboCop villain to say. But it is also almost objectively correct. 'Goliath' is winning, squeezing out smaller competitors and smaller banks, and Goliath is also winning by replacing its human employees with instruments that direct capital more swiftly from us to it. Now, in a just world, there would be an ample safety net, health care, and employment alternatives for the tellers, call center workers, and front office staffers poised to lose their jobs, as many of them may find said jobs uninspiring. We don't have all of those things in the U.S., of course, and the "greatest transfer from labor to capital" will leave a city-sized population jobless and struggling. On the consumer side, my concern is that we'll be stuck, at least in the interim, with a host of frustrating automated customer service systems, and ever-fewer options when it comes to picking a bank as the margins of the massive players edge out the ones that can't automate. It is, as ever, useful to remember to think about who automation is serving, and Mayo has made it rather explicit for us in this case—it's Goliath. Also helpful is the ambiguity with which Mayo has painted his automating Goliath. Goliath could refer to the banks themselves, which are ever-purer pools of capital, or it could be the elite c-suite executives at those major banking companies—they're the chief beneficiaries of this automation, after all—or I guess it could just be capitalism itself. In the current formulation, Goliath is winning, and 200,000 people are losing. Source
17. A Deepfake Deep Dive into the Murky World of Digital Imitation

Deepfakes are becoming easier to create – and that's opening the door for a new wave of malicious threats, from revenge porn to social-media misinformation. About a year ago, top deepfake artist Hao Li came to a disturbing realization: Deepfakes, i.e. the technique of human-image synthesis based on artificial intelligence (AI) to create fake content, are rapidly evolving. In fact, Li believes that in as soon as six months, deepfake videos will be completely undetectable. And that's spurring security and privacy concerns as the AI behind the technology becomes commercialized – and gets in the hands of malicious actors. Li, for his part, has seen the positives of the technology as a pioneering computer graphics and vision researcher, particularly for entertainment. He has worked his magic on various high-profile deepfake applications – from leading the charge in putting Paul Walker into Furious 7 after the actor died before the film finished production, to creating the facial-animation technology that Apple now uses in its Animoji feature in the iPhone X. But now, "I believe it will soon be a point where it isn't possible to detect if videos are fake or not," Li told Threatpost. "We started having serious conversations in the research space about how to address this and discuss the ethics around deepfake and the consequences." The security world too is wondering about its role, as deepfakes pop up again and again in viral online videos and on social media. Over the past year, security stalwarts and lawmakers say that the internet needs a plan to deal with various malicious applications of deepfake video and audio – from scams, to misinformation online, to the privacy of footage itself. 
Questions have arisen, such as whether firms like Facebook and Reddit are prepared to stomp out imminent malicious deepfakes — used to spread misinformation or for creating nonconsensual pornographic videos, for instance. And while awareness of the issues is spreading, and the tech world is coalescing around better detection methods for deepfakes, Li and other deepfake experts think that it may be virtually impossible to quell malicious applications for the technology.

How Does Deepfake Tech Work?

Deepfakes can be applied in various ways – from swapping a new face onto video footage of someone else's facial features (as seen in a Vladimir Putin deepfake created by Li), to creating deepfake audio imitating someone's voice to a tee. The latter was seen in a recently-developed replica of popular podcaster Joe Rogan's voice, created using a text-to-speech deep learning system, which made Rogan's fake "voice" talk about how he was sponsoring a hockey team made of chimpanzees. At a high level, both audio and video deepfakes use a technology called "generative adversarial networks" (GANs), which consists of two machine-learning models. One model, the generator, leverages a dataset to create fake footage, while the other, the discriminator, attempts to distinguish the fake footage from real samples. The two are trained against each other until the discriminator can no longer spot the fakes. (Diagram credit: Jonathan Hui.) GANs were first introduced in a 2014 paper by Ian Goodfellow and researchers at the University of Montreal. The concept was hailed as useful for various applications, such as improving astronomical images in the science industry or helping video game developers improve the quality of their games. While video manipulation has been around for years, the machine learning and artificial intelligence tools used for GANs have now brought a new level of reality to deepfake footage. 
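The adversarial loop described above can be caricatured in pure Python. Here a toy "generator" learns to fake a single number (the mean of the "real" data) while a toy "discriminator" adjusts its notion of what looks real; the distributions, learning rates, and update rules are all invented for illustration and are nothing like a production image GAN, which pits two neural networks against each other.

```python
import random

random.seed(0)

REAL_MEAN = 5.0        # the "real data" the generator tries to imitate
gen_mean = 0.0         # generator's current guess
disc_center = 2.5      # discriminator's current notion of "real"

for _ in range(1000):
    fake = random.gauss(gen_mean, 1.0)   # generator emits a sample
    real = random.gauss(REAL_MEAN, 1.0)  # a genuine sample
    # Discriminator: drift toward the midpoint between real and fake.
    disc_center += 0.01 * ((real + fake) / 2 - disc_center)
    # Generator: drift toward whatever the discriminator accepts as real.
    gen_mean += 0.05 * (disc_center - gen_mean)

# By the end, generated samples are centered near the real data's mean,
# i.e. this crude discriminator can no longer separate the two.
print(round(gen_mean, 2))
```

The fixed point of this tug-of-war is the real distribution's mean, which mirrors the GAN endgame: training stops being useful once the discriminator cannot tell fake from real.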
For instance, older deepfake applications (such as FakeApp, a proprietary desktop application that was launched in 2018) require hundreds of input images in order for a faceswap to be synthesized, but now, new technologies enable products – such as the deepfake face-swapping app Zao – to utilize only one image. "The technology became more democratized after…video-driven manipulations were re-introduced to show fun, real-time [deepfake] applications that were intended to make people smile," said Li.

Security Issues

From a security perspective, there are various malicious actions that attackers could leverage deepfakes for – particularly around identity authentication. "Deepfakes are becoming one of the biggest threats to global stability, in terms of fake news as well as serious cyber risks," Joseph Carson, chief security scientist with Thycotic, told Threatpost. "Deepfakes are getting to the point that any digital audio or video online can be questioned on its authenticity and integrity, and can be used to not only steal the online identity of a victim but now the voice and face. Identity theft has now entered a new phase." The ability to simulate someone's image and behavior can be used by spam callers impersonating victims' family members to obtain personal information, or criminals gaining entrance to high-security clearance areas through impersonating a government official. Already, an audio deepfake of a CEO's voice fooled a company into making a $243,000 wire transfer in the first known case of successful financial scamming via audio deepfake. But even beyond security woes, far more sinister applications exist when it comes to deepfake technology. At a more high-profile level, experts worry that deepfakes of politicians could be used to manipulate election results or spread misinformation. 
In fact, deepfakes have already been created to portray former president Donald Trump saying "AIDS is over," while another deepfake replaced the face of Argentine president Mauricio Macri with that of Adolf Hitler. "The risk associated with this will be contextual. Imagine a CEO making an announcement to his company, that ended up being a deepfake artifact," said Kothanath. "Same could go to sensitive messages between country leaders that could be the beginning of a conflict."

Privacy Scares

In September, the Chinese deepfake app Zao went viral in China. The app – which lets users map their faces over various clips of celebrities – spurred concerns about user privacy and consent when it comes to the collection and storage of facial images. The idea of seamlessly mapping someone's online face onto another's body is also provoking concerns around sexual assault and harassment when it comes to deepfake pornography. Several reports of deepfake porn in real-life situations have already emerged, with one journalist in 2018 coming forward with a revenge porn story of how her face was used in a sexually explicit deepfake video — which was developed and spread online after she was embroiled in a political controversy. Deepfake porn also emerged on Reddit in 2017 after an anonymous user posted several videos, and in 2018, Discord shut down a chat group on its service that was being used to share deepfaked pornographic videos of female celebrities without their consent. In 2019, a Windows/Linux application called DeepNude was released that used neural networks to remove clothing from images of women (the app was later shut down). "Deepfake gives an unsophisticated person the ability to manufacture non-consensual pornographic images and videos online," said Adam Dodge, executive director with EndTAB, in an interview with Threatpost. 
"This is getting lost in the conversation…we need to not just raise awareness of the issue but also start considering how this is targeting women and thinking of ways in which we can address this issue." There's also a privacy concern that dovetails with security. "There could be many ways an individual's privacy is compromised in the context of a media asset such as video data that is supposed to be confidential (in some cases not)," Arun Kothanath, chief security strategist at Clango, told Threatpost. "Unauthorized access to those assets leads me to think nothing but compromise on security breaches."

Deepfake Detection

On the heels of these concerns, deepfakes have come onto the radar of legislators. The House Intelligence Committee held a hearing in June examining the issue; Texas has banned deepfakes that have an "intent to injure a candidate or influence the result of an election." Virginia has outlawed deepfake pornography, and just last week, California also passed a law that bans the use of deepfake technology in political speech, and for non-consensual use in adult content. When it comes to adult content, the California law requires consent to be obtained prior to depicting a person in digitally produced sexually explicit material. The bill also provides victims with a set of remedies in civil court. But even as regulatory efforts roll out, there needs to also be a way to detect deepfakes – and "unfortunately, there aren't enough deepfake detection algorithms to be confident," Kothanath told Threatpost. The good news is that the tech industry as a whole is beginning to invest more in deepfake detection. Dessa, the company behind the aforementioned Joe Rogan deepfake audio, recently released a new open-source detector for audio deepfakes, which is a deep neural network that uses visual representations of audio clips (called spectrograms, used to train speech synthesis models) to sniff out real versus fake audio. 
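The spectrograms Dessa's detector relies on are just time-frequency images of audio. The sketch below builds one with a naive short-time Fourier transform in pure Python; real pipelines use optimized FFT libraries and mel-scaled filter banks, and the window and hop sizes here are arbitrary choices for illustration.

```python
import cmath
import math

def stft_magnitudes(signal, window=64, hop=32):
    """One magnitude spectrum per analysis window (a crude spectrogram)."""
    frames = []
    for start in range(0, len(signal) - window + 1, hop):
        chunk = signal[start:start + window]
        # Naive O(n^2) DFT of one window; keep only positive frequencies.
        spectrum = [
            abs(sum(x * cmath.exp(-2j * math.pi * k * n / window)
                    for n, x in enumerate(chunk)))
            for k in range(window // 2)
        ]
        frames.append(spectrum)
    return frames

# A sine wave completing 8 cycles per 64-sample window peaks in bin 8.
tone = [math.sin(2 * math.pi * 8 * t / 64) for t in range(256)]
spec = stft_magnitudes(tone)
print(max(range(32), key=lambda k: spec[0][k]))  # → 8
```

A detector then treats these frames like the rows of an image and feeds them to a classifier, which is what "visual representations of audio" means in practice.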
Images from Google's deepfake database

Facebook, Microsoft and a number of universities have meanwhile joined forces to sponsor a contest promoting research and development to combat deepfakes. And, Google and other tech firms have released a dataset containing thousands of deepfake videos to aid researchers looking for detection techniques.

Deepfake's Future

Despite these efforts, experts say that many of the threats posed by deepfakes – from disinformation to harassment – are existing problems that the internet is already struggling with. And that's something that even a perfect deepfake detector won't be able to solve. For instance, tools may exist to detect deepfakes, but how will they stop the video from existing on – and spreading on – social-media platforms? Li pointed out that already, fake pictures and news have spread out of control on social-media platforms like Twitter and Facebook, and deepfake is just more of the same. "The question is not really detecting the deepfake, it is detecting the intention," Li said. "I think that the right way to solve this problem is to detect the intention of the videos rather than if they have been manipulated or not. There are a lot of positive uses of the underlying technology, so it's a question of whether the use case or intention of the deepfake are bad intentions. If it's to spread disinformation that could cause harm, that's something that needs to be looked into." It's a question that social-media sites are also starting to think about. When asked how they plan to combat deepfakes, Reddit and Twitter both directed Threatpost toward their policies against spreading misinformation (Facebook didn't respond, but announced in September that it is ramping up its deepfake detection efforts). 
Twitter said that its policies work toward "governing election integrity, targeted attempts to harass or abuse, or any other Twitter Rules." On Reddit's end, "Reddit's site-wide policies prohibit content that impersonates someone in a misleading or deceptive manner, with exceptions for satire and parody pertaining to public figures," a Reddit spokesperson told Threatpost. "We are always evaluating and evolving our policies and the tools we have in place to keep pace with technological realities." But despite these efforts, deepfake prevention at this point is still reactive rather than proactive, meaning that once the bad deepfakes are live, the damage will still be done, according to Kothanath. Until that issue can be fixed, he said, the extent of damage that a deepfake can cause remains to be seen. "My worry will be the 'fear of the unknown' that leads in to a breach and to a privacy violation," Kothanath said. Source: A Deepfake Deep Dive into the Murky World of Digital Imitation