Showing results for tags 'supercomputer'.

Found 10 results

  1. System will mix Epyc CPUs and Radeon Instinct GPUs

AMD CEO Lisa Su, holding a Rome processor. The large chip in the middle is the 14nm I/O die; around it are pairs of 7nm chiplets containing the CPU cores. (Image: AMD)

AMD and Cray have announced that they're building "Frontier," a new supercomputer for the Department of Energy at Oak Ridge National Laboratory. The goal is to deliver a system that can perform 1.5 exaflops: 1.5×10¹⁸ floating-point operations per second. By way of comparison, a single Nvidia RTX 2080 GPU manages about 14 teraflops of compute performance with 32-bit numbers; Frontier will achieve roughly 100,000 times more. The fastest supercomputer on the Top 500 list weighs in at 200 petaflops, or 0.2 exaflops. As things stand, it would take the top 160 machines on the list combined to match Frontier's performance.

Frontier will use custom versions of AMD's Epyc processors (likely Zen 3 or Zen 4), each matched with four GPUs and connected using AMD's Infinity Fabric. Between nodes, Cray's Slingshot interconnect will be used, offering transfer rates of up to 200Gb/s per port. The GPUs will have their own stacked HBM (High Bandwidth Memory). The system will be housed in 100 cabinets occupying about 7,300 square feet of floor space, and power consumption will be 30-40MW.

The plan is for Frontier to be delivered in 2021, at a cost of about $500 million for the hardware and $100 million for research and development. It should be the fastest supercomputer in the world when it's delivered, and it will be the US government's second exaflops-capable system; the first will be the 1-exaflop Aurora, built using Intel Xeon SP processors and Intel Xe GPUs. The supercomputer will be made available to academics to run a wide range of simulations and experiments. The Radeon Instinct GPUs include hardware dedicated to machine-learning workloads, and it's likely that Frontier will be used for such tasks in addition to more conventional weather and nuclear-weapon simulations.
Source: Cray, AMD to build 1.5 exaflops supercomputer for US government (Ars Technica - Peter Bright)
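The headline comparisons in the Frontier article are easy to sanity-check with back-of-the-envelope arithmetic; the figures below are exactly the ones quoted in the text (1.5 exaflops for Frontier, ~14 FP32 teraflops for an RTX 2080, 200 petaflops for the then-fastest Top 500 machine).

```python
# Sanity-checking the performance ratios quoted in the article.
# FLOPS = floating-point operations per second; figures from the text above.
frontier = 1.5e18   # 1.5 exaflops (design target)
rtx_2080 = 14e12    # ~14 teraflops FP32 for a single Nvidia RTX 2080
summit   = 200e15   # ~200 petaflops, fastest Top 500 system at the time

print(f"Frontier vs RTX 2080:    {frontier / rtx_2080:,.0f}x")  # ~107,000x, i.e. "100,000 times more"
print(f"Frontier vs top machine: {frontier / summit:.1f}x")     # 7.5x
```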
  2. How a supercomputer is helping AT&T prepare for extreme weather

Prepping for climate change with help from a national lab. (Photo: Joe Raedle/Getty Images)

AT&T has a new climate change risk-assessment tool, developed with the help of Argonne National Laboratory’s scientists and supercomputing power, CNBC reports. The telecommunications company hopes to protect its infrastructure from the flooding and extreme weather events that are projected to increase as climate change continues.

A few years ago, AT&T started thinking about the long-term risks that climate change posed to its equipment. For example, the company has cell towers and sites across the country that are vulnerable to flooding and might need to be lifted above encroaching waters. In other places, services rely on above-ground copper wires that can blow down in large storms, and which might be safer buried underground as weather patterns shift. “We just essentially did a deep dive: What was our long-term planning, and how was that linked to climate change?” Shannon Carroll, director of environmental sustainability at AT&T, tells The Verge.

So they turned to the scientists at Argonne National Laboratory, like Rao Kotamarthi, chief climate scientist in the environmental sciences division. He and his colleagues used millions of hours of supercomputing time to analyze how wind and flood risk could change in a warmer future. But for the data to be useful, they had to use a much smaller scale than usual. “Basically, you have to model at the scale where this infrastructure exists,” Kotamarthi tells The Verge. “The most interesting questions people are asking are at those scales.” Most climate models work at the 100-kilometer (62-mile) scale, which means the data covers 100-by-100-kilometer chunks of North America. That gives you the big picture, but not finer-grained details like what’s happening on a particular block.
The Argonne team managed to get their regional climate models down to the 12-kilometer (7.5-mile) scale, and for the flooding data down to 200 meters (656 feet). That’s key for the kind of planning AT&T wants to use the information for. “It’s all about the resolution — how close of a view can you get,” AT&T’s Carroll says.

Analyzing climate data on such a small scale takes a lot of time and computing power, which makes it expensive. “The struggle is to get to those scales as much as possible but to still have some useful information,” Kotamarthi says. “How far you can go is a good question to ask.” All told, he estimates that crunching the numbers took around 80 million hours on parallel processors at Argonne National Laboratory’s supercomputer.

The Argonne scientists shrunk that information down and gave it to AT&T, which combined the data with its own mapping tools showing key infrastructure like cell towers and fiber cable. “You can see the potential impacts of climate change overlaid on that visually,” Carroll says. Right now, the company is starting small, and the map only covers the southeastern United States. “They’ve been hit extremely hard the last few years with severe weather events, and we have significant infrastructure there as well,” Carroll says.

Ultimately, the goal is to manage the company’s risks as it looks toward the future, Carroll says. “We’re a company that’s been around for over 100 years, and we plan to be around at least another 100 years,” he says. Knowing where to place cell towers, for example, to avoid flooding or extreme winds could mean having to shell out less money for repairs in the future. “We believe that there are long term financial benefits to doing this.”

Correction: Rao Kotamarthi is the chief climate scientist in the environmental sciences division, not the chief scientist of the environmental sciences division.

Source
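To see why Argonne's downscaling work is so expensive, consider how many grid cells it takes to cover a single 100 km × 100 km tile (the coarse-model cell size mentioned in the article) at each resolution named in the text. This is an illustrative sketch of the scaling, not the lab's actual model configuration.

```python
# Illustrative scaling: cell count over one 100 km x 100 km tile at each
# resolution named in the article (100 km, 12 km, and 200 m).
tile_km = 100.0
for res_km in (100.0, 12.0, 0.2):
    cells = (tile_km / res_km) ** 2
    print(f"{res_km:6.1f} km resolution -> {cells:12,.0f} cells per tile")
# The 200 m flood grid needs 250,000 cells where the coarse model used one,
# before even counting vertical levels and time steps.
```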
  3. AMD's Zen 4-powered El Capitan exascale supercomputer will be on nuclear duty

AMD, in partnership with HPE, announced today that its fourth-gen EPYC processors, based on the ‘Zen 4’ CPU core, will power Lawrence Livermore National Laboratory (LLNL)'s upcoming exascale supercomputer, dubbed ‘El Capitan’. The computer will also feature next-gen Radeon Instinct GPUs as accelerators. The two will work in tandem in AMD's enhanced version of the open-source heterogeneous ROCm software environment. Fourth-gen EPYC processors are codenamed “Genoa”, and AMD discussed them briefly back in October last year at the HPC AI Advisory Council's 2019 UK Conference. As such, the listed specification for El Capitan is currently as follows:

  • Next-generation AMD EPYC processors, codenamed “Genoa”. These processors will support next-gen memory (likely DDR5) and I/O subsystems for AI and HPC workloads.
  • Next-gen Radeon Instinct GPUs based on a new compute-optimized architecture for workloads including HPC and AI. These GPUs will use next-gen high-bandwidth memory (HBM) and are designed for optimum deep-learning performance.
  • The 3rd Gen AMD Infinity Architecture, which will provide a high-bandwidth, low-latency connection between the four Radeon Instinct GPUs and one AMD EPYC CPU included in each node of El Capitan. The 3rd Gen AMD Infinity Architecture also includes unified memory across the CPU and GPU, easing programmer access to accelerated computing.
  • An enhanced version of the open-source ROCm heterogeneous programming environment, being developed to tap into the combined performance of AMD CPUs and GPUs.

All this combined is expected to surpass two exaflops of double-precision (FP64) performance, making it the world's fastest supercomputer. This will be AMD's second exascale system, the first being Frontier (OLCF-5). El Capitan is primarily being designed to oversee the operations of the U.S. National Nuclear Security Administration and to ensure the safety and security of the U.S. nuclear reserves and arsenal. AMD expects El Capitan to be deployed in early 2023, so there's still a long way to go.

Source: AMD's Zen 4 powered El Capitan exascale Supercomputer will be on nuclear duty (Neowin)
  4. Microsoft unveils "fifth most powerful" supercomputer in the world

Last year, Microsoft entered a partnership with OpenAI, investing $1 billion into the research firm. Through this collaboration, the Redmond giant planned to develop Azure AI capabilities in large-scale systems. At its Build 2020 developer conference today, Microsoft unveiled the fifth most powerful publicly recorded supercomputer in the world.

Built exclusively with and for OpenAI as a product of the aforementioned collaboration, the supercomputer hosted in Azure specifically supports the training of large-scale AI models. Microsoft CTO Kevin Scott expressed delight at reaching this milestone, noting:

"The exciting thing about these models is the breadth of things they’re going to enable. This is about being able to do a hundred exciting things in natural language processing at once and a hundred exciting things in computer vision, and when you start to see combinations of these perceptual domains, you’re going to have new applications that are hard to even imagine right now."

The key difference between these newer learning models and many of the others developed by the AI research community is that they excel at handling a variety of tasks at the same time, such as different components involving language, grammar, context, and more. Microsoft's Turing models and supercomputing resources are planned to be made available through Azure AI services and GitHub to help developers leverage their power.

The supercomputer developed by Microsoft and OpenAI to enable the training of such large-scale models hosts over 285,000 CPU cores, 10,000 GPUs, and 400Gb/s network connectivity for each GPU server. It also features a "robust modern cloud infrastructure" along with access to Azure services, rapid deployment, sustainable datacenters, and more. Microsoft has compared it to other systems on the TOP500 supercomputer rankings in order to back its "top five" statement.
The Microsoft Turing model for natural language generation uses 17 billion parameters, allowing the process of "self-supervised" learning to be carried out in a much more nuanced manner. These next-generation AI models also offer another advantage: they only need to be trained once with a huge amount of data and supercomputing resources, and can then be fine-tuned for different tasks using much smaller datasets. On the advancement of this "AI at Scale" initiative, Scott noted:

"By developing this leading-edge infrastructure for training large AI models, we're making all of Azure better. We're building better computers, better distributed systems, better networks, better datacenters. All of this makes the performance and cost and flexibility of the entire Azure cloud better."

Other related announcements include the introduction of DeepSpeed, a new open-source deep learning library for PyTorch, and distributed training support for ONNX Runtime. The former reduces the computing power required to train large distributed models, while the latter adds model-training support and up to 17x performance improvements to the current version of ONNX Runtime.

Source: Microsoft unveils "fifth most powerful" supercomputer in the world (Neowin)
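The scale of a 17-billion-parameter model helps explain why such a cluster (and libraries like DeepSpeed) is needed at all. As a rough sketch, assuming half-precision (2-byte) weights, the parameters alone exceed the memory of any single GPU of that era:

```python
# Rough footprint of a 17-billion-parameter model, assuming FP16 weights
# (2 bytes each). Optimizer state during training typically multiplies this
# several times over, which is the kind of state DeepSpeed shards across GPUs.
params = 17e9
weight_bytes = params * 2
print(f"Weights alone: {weight_bytes / 1e9:.0f} GB")  # 34 GB, beyond a single 16-32 GB GPU
```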
  5. Met Office and Microsoft to build weather-forecasting supercomputer

System will be among the top 25 supercomputers in the world and twice as powerful as any other in Britain.

The technology will be able to produce more detailed models, which it is hoped will help the UK better prepare for extreme weather events. (Photograph: Cameron Smith/Getty Images)

The Met Office and Microsoft are to build a world-leading supercomputer capable of providing more accurate warnings of severe weather as part of a multimillion-pound agreement. It is expected to be the most advanced machine of its kind dedicated to weather and the climate, ranking among the top 25 supercomputers in the world and twice as powerful as any other in the UK, the Met Office said. The supercomputer will have a 10-year lifespan and is due to start working from summer 2022, based in the south of the UK.

Britain has been trying to better prepare for the impact of increasingly extreme storms, floods and snow. The technology will be able to produce more detailed models and improve local forecasting using high-resolution simulations, as well as supplying the aviation industry with more accurate predictions of wind and temperature.

Penny Endersby, the chief executive of the Met Office, said people might not realise the full effects the supercomputer would have because it would help prevent disasters such as flooding. “In the short term, you will see a more accurate weather forecast that may be more detailed to your area and you may be able to tailor it more, but actually it impacts your lives in ways you don’t know about because, for example, we provide services to aviation that enables planes to fly more efficiently and safely by knowing exactly where the winds are going to be and where turbulence is going to be, so you won’t realise that the supercomputer is making your flight safer, smoother, more efficient, but it will be,” she said.
“Equally, as climate change develops and policymakers make choices about how much bigger a tidal barrier we need or where to build flood defences, your home won’t flood and you’ll think you were lucky, but you won’t be lucky, other people will have planned for you, to say, well actually, we know how big a tidal surge could get because it’s been well modelled, we understand the flood risk from more intense surface rainfall and therefore we’ve prepared in advance, so a lot of that is actually the downsides you won’t see as well as the upsides you will see.” It will also be one of the world’s most environmentally sustainable supercomputers, powered using 100% renewable energy. Source: Met Office and Microsoft to build weather-forecasting supercomputer
  6. The world's most powerful supercomputer is now up and running

Japan's Fugaku supercomputer is likely to become researchers' new favorite toy.

After seven years in the works, the world's fastest supercomputer has officially been completed in Japan and is now available for researchers to start using, for projects ranging from fighting climate change to discovering new drugs. Work on the Fugaku supercomputer, hosted at the Japanese scientific research institute Riken, began in 2014 in collaboration with Fujitsu, with the device pitched to become a future pillar of the country's high-performance computing infrastructure.

The delivery of Fugaku's 432 racks was completed in May 2020; since then, trials of the system have been ongoing, mainly with projects aiming to accelerate research to combat the COVID-19 pandemic. The computer is now fully open for shared use, and Japan's Research Organization for Information Science and Technology (RIST) has already selected 74 research projects that will be implemented from next month. RIST has also urged researchers to submit proposals for new projects as part of a call for Trial Access Projects. Together with Riken, Fujitsu will continue to monitor the operation of Fugaku to ensure stable performance, while also working to enhance the user environment and provide better supercomputing technologies.

"The ultra-high-performance computer Fugaku is about to go into full-scale operation," said RIST president Yasuhide Tajima. "I look forward to seeing this most powerful 'external brain' ever developed by humanity helping to expand our knowledge, enabling us to gain deeper insights into the foundation of matter in both time and space, giving us better structural and functional analysis of life, society, and industry, enabling more accurate predictions; and even designing the unknown future of humanity."
Fugaku is designed to carry out high-resolution, long-duration, and large-scale simulations, and boasts up to 100 times the application performance of its predecessor, the K supercomputer, which was decommissioned in 2019. This compute power has earned the device the top spot for two consecutive terms in the Top500 list, which ranks the 500 most powerful computer systems in the world. At 442 petaflops, Fugaku stands a long way ahead of its competitors, with almost three times the capability of the number two system on the list, IBM's Summit, which delivers 148.8 petaflops.

Paired with AI and data science, these simulations are expected to provide high-level results to solve problems at a new scale. Anticipated outcomes include high-speed, high-precision drug-discovery simulations, early detection of diseases, accurate prediction and simulation of natural disasters, the creation of new materials for next-generation batteries or fuel cells, and even increased insight into fundamental science questions such as the creation of the universe.

Results from the trials carried out with the supercomputer are already promising. Researchers in Japan have been using Fugaku to test the efficiency of existing drugs against COVID-19, as well as to find ways to mitigate COVID-19 transmission through detailed droplet analysis. In a separate project, Japan's Tokyo Medical and Dental University (TMDU) and Fujitsu Laboratories revealed that the supercomputer had enabled them to complete a cancer gene analysis in less than a day, instead of months. By allowing for a better understanding of the relationship between cancer cells and cancer-related genes, the study could help establish new cancer therapies.
"This is just the beginning for Fugaku, and we are looking forward to seeing it truly demonstrate its tremendous potential," said Riken president Hiroshi Matsumoto. "Above all, Fugaku is a key national technology, and we will manage it responsibly with the goal to achieve research results that will help build a long-lived and healthy society, disaster mitigation, and better energy use, with the ultimate goal to establish the government's vision of an ultra-smart Society 5.0."

Alongside Fugaku, Japan holds another 33 supercomputers in the latest Top500 list; and although the country has firmly settled into the top spot, other nations are ramping up their efforts to develop ever-more-powerful machines. The US, for example, is currently building two exascale computing systems expected to launch next year. China and the EU have also both announced projects to develop exaflop-capable supercomputers in the next few years.

Source: The world's most powerful supercomputer is now up and running
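The claim that Fugaku has about three times the capability of the number two system follows directly from the Top500 figures quoted in the article (442 petaflops for Fugaku, 148.8 petaflops for Summit):

```python
# Ratio of Fugaku's measured performance to Summit's, per the article's figures.
fugaku_pflops = 442.0
summit_pflops = 148.8
print(f"Fugaku / Summit = {fugaku_pflops / summit_pflops:.2f}x")  # ~2.97x, i.e. almost 3x
```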
  7. Nvidia used AMD Epyc CPUs to develop a seriously impressive supercomputer

Team Green opts for Team Red technology.

The recently published rankings of the world's most powerful supercomputers contained more than just the news that ARM now powers the fastest machine on the planet (with the A64FX inside). Sitting at number 7 on the list is a new entry called Selene. What makes this supercomputer so special is not that it contains AMD’s Rome EPYC 7742 processors (64 cores, clocked at 2.25GHz) and Nvidia’s latest A100 accelerators. No, the interesting part is that the machine is owned by Nvidia itself. Why the company needs such a powerful supercomputer remains a mystery; perhaps to wow potential customers, or to trial its cutting-edge technology live.

Selene has 277,760 cores and half a petabyte of memory, with Nvidia’s own Mellanox HDR InfiniBand connecting the lot. But, at 27.58 petaflops, the machine is still significantly less powerful than Fugaku, which currently reigns supreme.

In other developments in the world of supercomputing, AMD and HPE’s Cray have won two further contracts with the US Department of Energy to build two supercomputers that will crack the exaflop barrier. The first, Frontier, is expected to be unveiled next year. The second, El Capitan, should enter operation in 2023 with a peak performance of two exaflops, roughly 70 times faster than Selene.

Source: Nvidia used AMD Epyc CPU to develop a seriously impressive supercomputer
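The exaflop contracts mentioned at the end of the Selene article can be put in perspective with a quick calculation, using only the numbers quoted in the text (27.58 petaflops for Selene, 1.5 and 2 exaflops for the planned systems):

```python
# Comparing the planned exascale systems against Selene's measured 27.58 petaflops.
selene     = 27.58e15   # Selene's measured performance, in FLOPS
frontier   = 1.5e18     # Frontier's target
el_capitan = 2.0e18     # El Capitan's target peak

print(f"Frontier vs Selene:   {frontier / selene:.1f}x")    # ~54x
print(f"El Capitan vs Selene: {el_capitan / selene:.1f}x")  # ~72x
```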
  8. NVIDIA partners with University of Florida to build AI supercomputer

The University of Florida (UF), in partnership with NVIDIA, is building the world's fastest artificial-intelligence supercomputer in higher education. When completed, the supercomputer will run at 700 petaflops. The project is estimated to cost about $70 million: $25 million comes from NVIDIA co-founder Chris Malachowsky, a UF alumnus, and $25 million from the firm in hardware, software, training, and services. The rest will come from UF and will be used to build a data center.

Through the partnership, UF aims to become one of the leading AI universities in the U.S. and to advance academic research. The effort will also help address some of the state's most complex challenges, including rising seas, aging populations, food insecurity, and more. Ron DeSantis, Florida's governor, remarked: “This is going to be a tremendous partnership. As we look to keep our best talent in-state, this will be a significant carrot, you’ll also see people around the country want to come to Florida.”

Additionally, UF will work closely with the Santa Clara firm to improve its existing supercomputer, HiPerGator, by integrating the NVIDIA DGX SuperPOD architecture. The system will be functional by early 2021.

Source: NVIDIA partners with University of Florida to build AI supercomputer
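The funding split above can be tallied quickly; this sketch assumes, as the article implies, that UF covers whatever the two $25 million contributions do not:

```python
# Funding breakdown for the UF/NVIDIA supercomputer, per the article's figures.
total_m = 70          # estimated project cost, $M
malachowsky_m = 25    # gift from NVIDIA co-founder Chris Malachowsky
nvidia_m = 25         # NVIDIA's in-kind hardware, software, training, services
uf_m = total_m - malachowsky_m - nvidia_m
print(f"UF's remaining share (data center): ${uf_m}M")  # $20M
```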
  9. Codeplay inks landmark deal with U.S. government to enable next-generation supercomputer

The AI software firm will work with researchers in the high-performance computing community to analyze big data, simulate future pandemics, and more.

The National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory, in collaboration with the Argonne Leadership Computing Facility (ALCF), is partnering with UK-based Codeplay Software to enhance GPU compiler capabilities for NVIDIA hardware. This collaboration will help NERSC and ALCF users, along with researchers in the high-performance computing community, produce high-performance applications that are portable across compute architectures from multiple vendors.

Today, most artificial intelligence software, including for cars, is developed using graphics processors designed for video games, according to Codeplay. The company provides tools designed to enable software to be accelerated by graphics processors or the latest specialized AI processors. NVIDIA A100 GPUs will power NERSC's next-generation supercomputer, Perlmutter, Codeplay said. NERSC supercomputers are used for scientific research in areas as diverse as alternative energy, the environment, high-energy and nuclear physics, advanced computing, materials science, and chemistry.

Enabling treatments and strategies to combat the pandemic

Over the past year, 20 research teams have been involved in COVID-19 simulations for analysis and to develop solutions to combat the virus, Codeplay said. ALCF supercomputers enable scientific research and engineering by offering supercomputing resources and hands-on expertise to the research community. These systems have helped advance computational science in an array of areas through the convergence of simulation, data science, and machine learning methods.
The simulations have accelerated the development of treatments and strategies to help fight the COVID-19 virus.

The power of open source

Today, the SYCL open-standard programming model supports a variety of accelerators through multiple implementations, Codeplay said. SYCL will be supported on the forthcoming Department of Energy Aurora exascale supercomputer and, with this work, it can be used with Perlmutter to help scientific app developers and users be more productive, according to Codeplay.

"With thousands of users and a wide range of applications using NERSC's resources, we must support a wide range of programming models," explained Brandon Cook, application performance specialist at NERSC, in a statement. "In addition to directive-based approaches, we see modern C++ language-based approaches to accelerator programming, such as SYCL, as an important component of our programming environment offering for users of Perlmutter."

"As a key programming model for Argonne's upcoming exascale system, SYCL and DPC++ will benefit the broader DOE community by providing portability of accelerator programming models across DOE computing facilities," said Andrew Richards, founder and CEO of Codeplay Software, in a statement. "We are delighted to see the SYCL programming standard being embraced by the U.S. national labs and providing scientists developing accelerated C++ with a standardized software platform."

The use of high-performance computing (HPC) in data modeling, AI, and analytics has already significantly exceeded expectations, Codeplay said. But the next decade will see explosive growth in capability and performance, achieved with special new processors and industry-standard software programmability. Today, many semiconductor and processor companies have their own specialized processor architectures tuned for complex AI functions.
The latest AI applications use neural networks to enable machine learning, and these processors allow neural-network operations to be performed with greater efficiency, according to the company. However, as processor design races to catch up with AI's needs, research continues to evolve and to extend the processing demands placed on the hardware; ensuring that the underlying hardware and software can progress independently is crucial to staying competitive. Codeplay is "a big believer in open standards," Richards said. The company has developed a range of products called ComputeSuite that aim to bridge the gap between the latest AI processors and AI application developers using established open-standard interfaces.

Source: Codeplay inks landmark deal with U.S. government to enable next-generation supercomputer
  10. Nvidia will build the 'world's fastest AI supercomputer'

Cineca and Nvidia team up for a European exascale-class supercomputer for AI. (Image credit: Atos)

Nvidia and Cineca, an Italian inter-university consortium and major supercomputing center, have announced plans to build ‘the world’s fastest AI supercomputer.’ The upcoming Leonardo system will use nearly 14,000 Nvidia A100 GPUs for a variety of high-performance computing tasks, and its peak performance is expected to hit 10 FP16 ExaFLOPS.

The supercomputer will be based on Atos’ BullSequana XH2000 supercomputer nodes, each carrying one as-yet-unspecified Intel Xeon processor, four Nvidia A100 GPUs, and a Mellanox HDR 200Gb/s InfiniBand card for connectivity. The blades are water-cooled, and there are 32 of them in each HPC cabinet. The BullSequana XH2000 architecture is very flexible and can house almost any CPU and GPU, so we can only guess which Intel Xeon processor will be used for Leonardo.

Scientists from Italian universities plan to use Leonardo for drug discovery, space exploration and research, and weather modelling. Traditionally, such applications rely on high-performance simulation and data-analytics workloads that require FP64 precision. But Nvidia says that today many HPC tasks rely on artificial intelligence and machine learning, and for such workloads FP16 precision is enough. Quite naturally, a massive number of GPUs can also perform high-resolution visualizations.

Nvidia’s A100 GPU was designed primarily for computing, so it supports all kinds of precision, including ‘supercomputing’ FP64 and ‘AI’ FP16. 14,000 Nvidia A100 GPUs can achieve up to 8.736 FP16 ExaFLOPS (624 TFLOPS per GPU with structural sparsity enabled × 14,000). Meanwhile, the same number of GPUs can provide 135,800 FP64 TFLOPS, which is slightly below Summit’s 148,600 FP64 TFLOPS.
Nvidia believes AI and ML are crucial for today’s supercomputers, which is why the company prefers to quote Leonardo's peak FP16 performance with structural sparsity enabled.

“With the advent of AI, we now have a new metric for measuring supercomputers. As a result, the performance of our supercomputers has exploded as the computational power of them has increased exponentially with the introduction of AI," Ian Buck, VP and GM of Accelerated Computing at Nvidia, told TechRadar Pro. "Today’s modern supercomputers must be AI supercomputers in order to be an essential tool for science. Nvidia is setting a new trend by combining HPC and AI. Only AI supercomputers can deliver 10 ExaFLOPS of AI performance featuring nearly 14,000 NVIDIA Ampere architecture-based GPUs.”

Sources: Nvidia press release, Nvidia blog post

Source: Nvidia will build the 'world's fastest AI supercomputer'
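Leonardo's aggregate throughput figures follow directly from per-GPU A100 numbers; the arithmetic can be reproduced from the values given in the article (624 FP16 TFLOPS with structural sparsity per GPU is quoted; ~9.7 FP64 TFLOPS per GPU is implied by the 135,800 TFLOPS total):

```python
# Reproducing Leonardo's aggregate-throughput arithmetic from per-GPU figures.
gpus = 14_000
fp16_tflops = 624     # A100 FP16 with structural sparsity (quoted in the text)
fp64_tflops = 9.7     # A100 peak FP64 (implied by the 135,800 TFLOPS total)

print(f"FP16: {gpus * fp16_tflops / 1e6:.3f} ExaFLOPS")  # 8.736
print(f"FP64: {gpus * fp64_tflops:,.0f} TFLOPS")         # 135,800
```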