Showing results for tags 'gpu'.

  1. Pixel 5 sees dramatically improved GPU performance after April patch

It's not an amazing improvement; the Pixel 5 GPU was just really bad to begin with.

Our Pixel 5 came in this funky green version. (Image credit: Ron Amadeo)

Nearly six months after the release of the Pixel 5, Google has revamped the way the GPU works, bringing the phone up to the level you'd expect from a Snapdragon 765G phone. The April 2021 security update for the Pixel 5 and Pixel 4a (5G) came with a note saying that it includes "performance optimizations for certain graphics-intensive apps & games." Hands-on testing reveals the update apparently dramatically improves the Pixel 5 GPU, reportedly showing performance gains of up to 50 percent in some apps and benchmarks.

We don't have a Pixel 5 on hand to test, but Andreas Proschofsky from Der Standard tipped off the Internet that he's seeing "30-50 percent better" performance in 3DMark after the update. Andrei Frumusanu from Anandtech confirmed "performance has been essentially doubled from the scores published [in Anandtech's review], and in line or better than other 765G phones," adding that "the fact it took 6 months is sad, though." Hmm. Yes.

It might seem impossible to add 50 percent better performance from a mere software update, but Google is just fixing the terrible state of the launch phone. There was simply that much room for improvement relative to other phones. When we reviewed the device, we called it "the world's slowest Snapdragon 765G phone," noting that other Snapdragon 765G smartphones like the OnePlus Nord could wipe the floor with the device in head-to-head GPU benchmarks. It wasn't a great look for the Pixel 5, which was already facing a switch to mid-range hardware that meant it would be slower than the Pixel 4. Benchmarks allow us to put exact numbers on the changes, but this isn't a trick of benchmarking; the numbers reflected real-world performance when it came to 3D gaming, which was terrible on the Pixel 5.

Google says the April 2021 security update also comes with camera quality improvements for third-party apps, a fix for an issue that would cause freezing on startup, and a fix for some missing home screen settings. The update should be rolling out now to Pixel phones—just mash that update button.

Pixel 5 sees dramatically improved GPU performance after April patch
  2. AMD announces $479 Radeon RX 6700 XT, says it will have ‘significantly more GPUs available’ ‘We know it’s crazy out there, we’re doing everything we can’ AMD has heard you loud and clear: you can’t buy its excellent RX 6800 and 6800 XT graphics cards at anything close to their retail prices. Today, the company’s announcing a new GPU that might (but probably won’t?) change that: the Radeon RX 6700 XT. “With the AMD Radeon RX 6700 XT launch, we are on track to have significantly more GPUs available for sale at launch,” AMD tells The Verge. Even better: AMD claims it’ll begin refreshing stock of RX 6000 GPUs and Ryzen 5000 CPUs every week on its own website, where it’ll sell them at their retail prices. We’ve been waiting for that for nearly two months. The new RX 6700 XT will arrive on March 18th for a suggested retail price of $479. In a normal, sane year, that would slot it between Nvidia’s $500 RTX 3070, which we called “the 1440p sweet spot,” and Nvidia’s bang-for-the-buck $400 RTX 3060 Ti, where you might have to dial down the settings here and there. It’s also a full $100 less than AMD’s $579 RX 6800, which we found had enough oomph for entry-level 4K gaming. This isn’t a sane period for GPU buyers, though. In December, the actual street prices of the $400 3060 Ti, the $500 3070, and the $579 RX 6800 were $675, $819, and $841, respectively — and that was before Trump’s tariffs pushed Nvidia and AMD’s board partners to raise their retail prices. “We know it’s crazy out there, we’re doing everything we can,” says AMD’s Nish Neelalojanan, a director of product management. That not only includes more stock at AMD.com but also additional supply for board partners and manufacturers that’ll sell gaming PCs later on. AMD wouldn’t say how much of that stock is being allocated toward GPUs that’ll be sold at AMD.com, though. It says it doesn’t set its partners’ retail prices either. Assuming for a moment that AMD pulls it off, managing substantially greater availability than Nvidia’s recent debut, the RX 6700 XT sounds like it could be a compelling pick. With 230 watts of power, 12GB of video memory, and 40 compute units (compared to 60 for the RX 6800 and 80 for the RX 6800 XT), AMD’s promising you’ll be able to play all of the latest games at maximum settings at 1440p resolution. With a fast enough CPU, AMD suggests you should be able to hit 212 fps in Overwatch, 272 fps in League of Legends, and 360 fps in Rainbow Six Siege, enough for esports gamers to justify some of the fastest monitors on the market. AMD says it should be fast enough for ray tracing at 1440p as well. The company’s early benchmarks (see above) show it pulling ahead of Nvidia’s 3070 and 3060 Ti, though not in all games. It’s worth noting these numbers were generated using the frame rate boost of AMD’s Smart Access Memory (generically known as Resizable BAR), something that’s only just starting to roll out to Nvidia’s graphics cards and generally requires newer CPUs to work. That said, AMD also just announced that it will begin rolling out Resizable BAR to its Ryzen 3000-series processors, not just the newer Ryzen 5000 ones. The new card will require two power connectors, an 8-pin and a 6-pin, and the GPU should be clocked somewhat faster than in earlier RX 6000-series cards at up to 2424MHz. It’s got a 192-bit memory bus, down from 256-bit for the company’s other 6000-series cards. 
Clearly, we’ll have to test the RX 6700 XT’s performance ourselves, but nothing matters more than availability — and where that availability will leave the card’s actual price by the time you can buy one. AMD claims cards will be available on March 18th from all of the usual board partners, 40 different system builders, and AMD.com, with prebuilt systems including the HP Omen 25L and 30L desktops coming later this spring and beyond. While AMD’s own version is a dual-fan card with the same basic reference design as the RX 6800 and 6800 XT, it appears many of AMD’s partners are opting for three-fan designs. Those generally carry a premium price as well. And in case you’re wondering, AMD has no plans to nerf the crypto mining performance of the RX 6700 XT the way Nvidia did for Ethereum with the RTX 3060. “We have no plans to limit the product in that way,” AMD told journalists this week. You can watch the company’s 20-minute presentation, including a tease of Resident Evil Village with ray tracing, in the video below. Update, 11:36AM ET: Added that the HP Omen and other prebuilts will arrive later this spring. AMD announces $479 Radeon RX 6700 XT, says it will have ‘significantly more GPUs available’
  3. Intel’s new desktop GPUs won’t work in AMD systems Intel’s first desktop GPUs in 20 years need a special BIOS Intel launched its first Iris Xe desktop graphics cards yesterday, but you won’t see them appearing in AMD-powered systems. While Nvidia and AMD’s desktop GPUs typically work across a variety of Intel and AMD processors, Intel’s new desktop GPUs are a little more limited for now. “The Iris Xe discrete add-in card will be paired with 9th gen (Coffee Lake-S) and 10th gen (Comet Lake-S) Intel® Core™ desktop processors and Intel(R) B460, H410, B365, and H310C chipset-based motherboards and sold as part of pre-built systems,” says an Intel spokesperson in a statement to Legit Reviews. “These motherboards require a special BIOS that supports Intel Iris Xe, so the cards won’t be compatible with other systems.” One of Intel’s first Iris Xe desktop GPUs. Image: Intel The restrictions make more sense when you consider the target market of these first Intel desktop GPUs. Intel is working with Asus and other vendors to sell these cards to system integrators who will bundle them with prebuilt systems. These aren’t GPUs you can just order online, and they’re meant to be specially bundled. Intel’s idea with its initial Iris Xe desktop GPUs is to simply improve what’s available on mainstream PCs right now. Most standard business-focused PCs ship with integrated graphics, and Intel is trying to offer something that improves multi-display support and hardware acceleration. These cards aren’t designed to improve gaming or to be used in gaming rigs. Intel is also working on its Xe-HPG architecture that could eventually deliver cards that can compete with AMD and Nvidia. Hopefully these cards won’t be restricted to Intel systems in a similar way, though. Intel’s new desktop GPUs won’t work in AMD systems
  4. Intel's Iris Xe discrete desktop graphics card is finally here

Today is a historic day for the PC, as Intel, after failing to do so in the past, has finally launched its first discrete GPU based on the Iris Xe architecture. The company announced its plans for DG1 (short for Discrete Graphics 1) last year at CES 2020, where we were teased with an early variant of the GPU running Destiny 2. Since then, we have seen the performance of Intel's Xe-LP in Tiger Lake laptops. For example, the Dell XPS 13 9310 that we reviewed has the Intel Core i7-1165G7 with 96 integrated Iris Xe Execution Units (EUs).

The Iris Xe desktop version that Intel is launching today has 16 fewer units, at 80. However, the performance of the graphics card may not be far behind the 96 EU mobile integrated variant: as a desktop part it shouldn't be constrained by heat or power the way laptop chips are, and the GPU also has the entire memory bandwidth as well as its dedicated 4GB of VRAM all to itself.

Intel has worked with ASUS and another AIB partner to release the Iris Xe desktop card. A couple of pictures of the cards from the two partners have also been unveiled. Oddly, Intel has referred to the other AIB partner simply as "other partners" (it is allegedly Colorful, according to German media outlet Hardwareluxx). ASUS' variant appears to be a passively cooled solution, which could be ideal for a silent PC setup, while the alleged Colorful card is a more colorful dual-fan edition.

As Iris Xe features AV1 decode acceleration - something which other lower-end desktop GPUs currently lack - it should be useful for modern Home Theatre PCs (HTPC) or Media PCs. Iris Xe also supports DP4a deep-learning inference acceleration and could be helpful as a cheap accelerator for AI and ML workloads to assist the CPU.

For now, Intel's Iris Xe discrete graphics will be sold by OEMs and system integrators only. Intel also seems to be hinting that systems with this GPU won't be too pricey, as it is targeting the lower-end "high-volume, value-desktop market" with this. So while Intel currently isn't looking to lock horns with the big boys AMD and Nvidia at the high end, it definitely plans to do so sometime later in the year.

Source and images: Intel

Intel's Iris Xe discrete desktop graphics card is finally here
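For readers wondering what the DP4a capability mentioned above actually is: it is a four-way int8 dot product accumulated into a 32-bit integer, which is why it is handy for quantized inference. The snippet below is a plain-Python model of that operation for illustration only; it is not Intel's API or driver interface.

```python
# Illustrative model of the DP4a operation mentioned above: a dot product of
# four signed 8-bit values from each operand, accumulated into a 32-bit
# integer. This is a software sketch of what the hardware computes, not
# Intel's actual API.

def dp4a(a: list[int], b: list[int], acc: int = 0) -> int:
    """Return acc + sum(a[i] * b[i]) over four signed 8-bit lanes."""
    assert len(a) == len(b) == 4
    for x, y in zip(a, b):
        # each lane holds a signed 8-bit value in a real register
        assert -128 <= x <= 127 and -128 <= y <= 127
        acc += x * y
    return acc  # accumulated in a 32-bit register in hardware

# Example: one output element of a quantized (int8) convolution or GEMM
print(dp4a([12, -3, 7, 100], [5, 20, -1, 2]))  # 60 - 60 - 7 + 200 = 193
```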
  5. AMD could have a cunning plan to cut the cost of its future graphics cards Or at least ensure that they don’t become even more expensive... (Image credit: Future) AMD’s graphics cards could be very different in the future, switching to use a multi-chip module (MCM) design, at least according to a freshly spotted patent. Notebookcheck.net highlighted the discovery of the patent by hardware leaker @davideneco25320 on Twitter, and it’s an interesting read for sure, providing a potential glimpse of how AMD is set to shift its GPU design in order to keep a lid on spiralling graphics card prices, and better compete with Nvidia (and indeed Intel for that matter). The broad idea, in simple terms, is to use MCM or multiple chips (‘chiplets’) on one board – as AMD already does with its Ryzen processors – as opposed to the current monolithic (single chip) design. The move to MCM could confer a number of benefits in terms of ensuring better yields as graphics cards become more and more powerful, and their design becomes more demanding to figure out and implement while being able to keep costs down suitably. As we’ve seen in recent times, GPUs, or certainly the more powerful ones, have already become eye-wateringly expensive. However, there are serious issues in making the change to an MCM model in terms of the way graphics cards work, but the AMD patent outlines how to tackle these thorny problems. New way forward The main stumbling blocks with an MCM design lie in the fact that games are programmed specifically to work with a single GPU, so this new way of doing things – which is effectively using multiple GPUs on a single board – is problematic in that respect. And it’s also a tricky matter to implement parallel workloads across multiple chiplets anyway, while keeping memory content in sync across them. AMD’s solution in the patent is to hook up these GPU chiplets via a high bandwidth passive crosslink, with one of these chiplets being the primary GPU, as it were, with that directly connected to the CPU – meaning the processor (and OS) would see the graphics card as just a single (monolithic) entity in terms of coding software or games for it. Furthermore, to try and tackle the aforementioned memory content issues, each GPU chiplet would have its own last-level cache, and these would be connected in a way to ensure coherency across all the chiplets. When might this new design actually happen? It is feasible that AMD could be looking to MCM technology for next-gen RDNA 3 graphics cards, but that could be optimistic, and perhaps further down the line – maybe RDNA 4 – would be a more likely prospect. This is all so much guesswork at this point, of course, and we can’t read too much into a single patent anyway. These kind of design concepts are often exploratory or experimental in nature, after all. But it does show the direction AMD intends to travel in, or is at least seriously considering, while casting a light on potential solutions to the major drawbacks that traditional monolithic designs are beset with. As we head further into the future, these kind of graphics cards could be increasingly difficult to manufacture while keeping yields at a palatable enough level (or in other words, keeping costs down). AMD isn’t the only firm thinking this way, as you might expect, with Nvidia exploring the use of MCM itself for Hopper graphics cards, rumor has it, and indeed Intel with Xe HP Arctic Sound. 
Remember that Intel is expected to compete with Nvidia and AMD in the heavyweight gaming arena this year with the launch of its Xe-HPG card. AMD could have a cunning plan to cut the cost of its future graphics cards
  6. Nvidia's next-gen GPUs may honor British mathematician Ada Lovelace

Lady Lovelace might be the next Nvidia GPU namesake (Image credit: Nvidia)

Nvidia might be delaying its latest multi-chip GPU, codenamed Hopper, to bring a new single-chip GPU architecture series to market first, one that will apparently honor the pioneering British mathematician Ada King, Countess of Lovelace, who is credited with writing the world's first computer program in 1843.

Codenamed Lovelace, the forthcoming GPU architecture will feature Nvidia's latest 5nm design, meaning that both Samsung and TSMC could manufacture the die. That said, it's unclear whether either has secured a contract to do so or if the new architecture is even at that stage of development yet.

A fairly reliable source for Nvidia news online, kopite7kimi, wrote earlier this month about the delay of Hopper and the possible rewriting of Nvidia's planned GPU roadmap. The multi-chip Hopper architecture - named after another computer science pioneer, Grace Hopper - is more likely destined for the high performance computing market, which took a hit in 2020 due to the coronavirus pandemic, though that doesn't necessarily mean this is the cause of the rumored delay.

Likewise, it isn't known whether Lovelace is destined for the gaming GPU market or not, but it's very likely to be on Nvidia's gaming GPU roadmap, according to Videocardz. Still, without an official announcement from Nvidia, it's largely just speculation at this point.

Who is Ada Lovelace?

If Nvidia does name its next consumer-market GPU architecture after Ada Lovelace, it would be a fitting tribute to a computer pioneer who went unrecognized for over a century after her death. A brilliant mathematician, Lovelace published an algorithm in 1843 describing how Charles Babbage's proposed Analytical Engine - the world's first mechanical computer (as we understand that term today) - could be used to compute Bernoulli numbers. Her algorithm is now widely considered the world's first computer program, making her the world's first computer programmer, though she is still largely unknown outside of computer science departments around the world.

And while Nvidia lists her along with Hopper and Margaret Hamilton on its "Company of Heroes" t-shirt, we hope Nvidia's recognition of Lovelace's contribution doesn't end there.

Nvidia's next-gen GPUs may honor British mathematician Ada Lovelace
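Lovelace's 1843 program computed Bernoulli numbers on the proposed Analytical Engine. As a modern illustration of the same calculation - not a reconstruction of her actual sequence of operations - here is a short Python sketch using the standard recurrence for Bernoulli numbers:

```python
# Bernoulli numbers via the standard recurrence
#   sum_{j=0}^{m} C(m+1, j) * B_j = 0  for m >= 1, with B_0 = 1.
# This is a modern illustration of the quantity Lovelace's 1843 program was
# designed to produce, not her actual algorithm for the Analytical Engine.
from fractions import Fraction
from math import comb

def bernoulli(n: int) -> list[Fraction]:
    """Return exact Bernoulli numbers B_0 .. B_n."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-s / (m + 1))
    return B

# B_0..B_8 = 1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30
print(bernoulli(8))
```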
  7. Intel Xe DG1 graphics card 3DMark leak again suggests AMD and Nvidia won’t be troubled But at least this rumor is more promising than the last leak we saw for DG1 (Image credit: Shutterstock) Intel’s Xe DG1 graphics card has been spotted in a 3DMark benchmark, or at least the rumor mill believes that result is for Intel’s first crack at a discrete GPU. As ever, we shouldn’t read too much into this given that it is just speculation that this is DG1, although the source is a reliable one, the ever-present TUM_APISAK. But even if the leak is on the money, remember that this is an early sample GPU, and won’t reflect the exact performance Intel may achieve with the final product. At any rate, the purported DG1 graphics cards scored 5,538 in 3DMark’s Fire Strike test (paired with an Intel Core i9-9900K processor) and hit a graphics score of 5,960. That’s not a massively impressive result, but as we’ve already mentioned, it must be treated with caution. It’s in the ballpark of a graphics card as old as the GeForce GTX 750 Ti, albeit a bit faster than that veteran GPU (which scored 5,402 for graphics in a 3DMark result highlighted on Twitter). As Wccftech, which spotted this, observes, it’s a fair way behind the GTX 1050 to pick out another example from Nvidia’s line-up – that previous-gen budget card is around 500 points to 800 points better than the DG1 depending on which 3DMark result you look at. No cause for concern? Anyhow, you get the idea – and as with a previous Geekbench result, which showed that the DG1 wasn’t much better than Nvidia’s low-end MX250, the overall vibe thus far is that Intel’s initial product is not going to be causing either AMD or Nvidia any sleepless nights. That said, at least this new 3DMark leak shows the Intel GPU comfortably outdoing the likes of the MX350, by around a third in terms of that graphics score in fact. Further remember that Intel’s first GPU is likely to be a testing the waters affair, and as we’ve previously heard via the rumor mill, it’s going to be a mobile part – in other words, a graphics card for laptops, not a GPU for a desktop PC. With further development, perhaps it could start to worry Intel’s rivals at least in the notebook arena – particularly when combined with the potential of Xe integrated graphics with Intel’s Tiger Lake mobile processors. Intel Xe DG1 graphics card 3DMark leak again suggests AMD and Nvidia won’t be troubled
  8. AMD just revealed its Big Navi graphics card design on Fortnite of all things Welcome to 2020, folks (Image credit: AMD) We're a little more than a month out from the reveal of AMD Big Navi graphics cards, but Team Red has just graced us with the first look at its upcoming Radeon RX 6000 Series. AMD Radeon took to Twitter late Monday afternoon, revealing what looks to be an absolute monster of a graphics card, along with a brand-new cooler design. And AMD fans can even get a closer look at the card on Fortnite of all things, through AMD's Battle Arena in the same game. If you want to log in and take a look at the card, you can do so with code "8651-9841-1639". Luckily, if you don't want to go through downloading Fortnite and navigating to AMD's custom map, some friendly users have shared plenty of screenshots on Twitter. AMD hasn't shared anything in terms of specs or performance, but at least we know for sure that the company is indeed going with the Radeon RX 6000 series as a name for the upcoming graphics cards. The rest we'll just have to wait until October 28 to see for sure. Let's do some speculation anyways, though While AMD hasn't shared any hard specs or performance information, that doesn't mean we can't learn a bit about the upcoming graphics cards by looking at the pictures. The first thing that's immediately apparent is that AMD's reference board for whatever card it is – probably the flagship – is using a triple-fan design. This isn't exactly new ground for AMD to cover, 2018's Radeon VII also had a triple-fan reference design, but it does mean that this is likely going to be a bit of a monster graphics card. For instance, the AMD Radeon RX 5000 series, led by the RX 5700 XT, all used single-fan blower-style reference coolers. That decision wasn't exactly popular, but the cards didn't run into much problem keeping themselves cool – at least in our testing. The move to a triple-fan design could be a sign that the graphics card will need more significant cooling, either because the power requirements are much higher or AMD is really pushing clocks high (or both). Another thing we can glean from this image is the 2 x 8-pin PCIe power connection. This configuration can deliver up to 375W, so it honestly doesn't really tell us much about how much juice the graphics card will take – both the RTX 3080 and RTX 3090 require the same power connectors, even if you have to connect those to a 12-pin adaptor for the Founders Edition. It does tell us that AMD is not adopting the 12-pin power connector, however, which will likely be excellent news for anyone who doesn't like living the dongle life on their desktop computer. We're currently in the process of downloading Fortnite to see what's happening in closer detail, but you can be sure we'll update this article if there's some significant detail we've missed. AMD just revealed its Big Navi graphics card design on Fortnite of all things
  9. Wait for AMD to make its Big Navi move before you buy the Nvidia RTX 3080

Wait for both sides to shoot their shot (Image credit: Nvidia)

Earlier this week, Nvidia unveiled its Ampere GeForce cards, led by the RTX 3080 and accompanied by the Titan replacement RTX 3090 and the RTX 3070. All three of these graphics cards are bringing major generational improvements over the 2018 Turing lineup, without another price increase like we saw last time around. And, while it's definitely exciting seeing the promise of such improved performance, it's important to note that Nvidia isn't the only company that's going to be releasing new graphics cards this fall – AMD Big Navi is coming soon, too.

Beyond claims made on behalf of the PS5 and Xbox Series X – that those consoles will be capable of 4K 60 fps gameplay with ray tracing – we don't really know what RDNA 2 will be capable of, or where exactly it will be competing within this new Nvidia lineup – but you should still wait to see the hand AMD has to play before you jump in.

(Image credit: Future)

AMD RDNA 2 could also slap

The GPUs in the PS5 and Xbox Series X are super impressive. For a console. It is kind of ludicrous to believe that the GPUs being included in the consoles are not going to be cut-down versions of whatever ends up in actual AMD graphics cards, in order to fit the cooling and power requirements that a console has. Think about it – there's no way that Microsoft and Sony are going to be stuffing a 750W+ power supply in a console, so we're going to get a way less powerful GPU. The actual graphics cards that come out of RDNA 2 will likely be way more powerful than what ends up going into the gaming consoles.

AMD has already said that it's going to be competing with Nvidia at the high end – maybe not as high end as the Nvidia GeForce RTX 3090 – so it's very possible that whatever top-end graphics card AMD launches will actually put some pressure on Nvidia. If AMD can launch a graphics card that tackles the RTX 3080 at 4K, or even outperforms it at the same price, anyone who buys an RTX 3080 might feel a little burned.

It's also important to note that AMD didn't knock Intel out in the CPU race right away, either. The first generation of Ryzen was impressive, but didn't really threaten Intel too much – similar to the launch of the AMD Radeon RX 5700 XT in July 2019. However, if AMD takes the same approach with RDNA that it did with Zen, this generation we could see AMD really start to hit Nvidia hard – though it does remain to be seen. Nvidia isn't Intel, after all, and it looks like Ampere is the biggest jump in GPU performance in years, something that can't really be said about, say, Coffee Lake.

(Image credit: Nvidia)

Only three Nvidia cards to start

At its GeForce Special Event, Nvidia only had three cards to show, and while it's likely that those will be the only Nvidia cards we get this year, the gaps in performance and price between the three are pretty huge. Nvidia is claiming that the GeForce RTX 3070 is slightly faster than the RTX 2080 Ti, whereas the RTX 3080 is apparently nearly twice as fast as the RTX 2080 – that's a huge gap in performance, and it clearly leaves room for Nvidia to slide in another graphics card. Now, for the sake of argument, let's say that AMD launches a card that comes close to beating the RTX 3080 at the same price point. We already saw a Lenovo leak a little while back that showed an RTX 3070 Ti with 16GB of VRAM that was conspicuously absent from Nvidia's showcase.

It's important to keep in mind that graphics card lineups never all come out at the same time. While the RTX 2080 Ti, RTX 2080 and RTX 2070 all got announced and launched at the same time at Gamescom 2018, we didn't see the RTX 2060 until CES 2019, which was then followed up by the GTX 1660 Ti, GTX 1660 and GTX 1650. And then, we got yet another refresh in the middle of 2019 with the Super cards. With its RTX 30 series, Nvidia likely has a whole cavalcade of graphics cards that it can launch whenever. If AMD undercuts one of the RTX 30 series cards without absolutely dominating it, Nvidia can easily launch a new GPU that can just steal AMD's thunder.

So, especially if you have your eyes on the RTX 3070 – and you definitely should – it's super prudent to wait for AMD to show its hand. If Nvidia is going to react to any of AMD's launches, this is the card that will be most significantly impacted.

(Image credit: Nvidia)

Patience will pay off

Even if you're a die-hard Nvidia loyalist, waiting a few months before adopting a new generation of graphics card is just a good idea. Over the course of the first few months of this generation of graphics cards, both AMD and Nvidia will have a bunch of kinks to work out through driver and firmware updates. It's not like your graphics card is going to explode or anything before the drivers mature, but you might get degraded performance and game crashes out of nowhere – on top of some possible visual glitches.

But beyond that, we're about to enter into one of the most competitive graphics card battles in years, and until AMD shows its hand, we can't really proclaim Nvidia as the victor yet. And even if Nvidia does win, grabbing a graphics card a few months down the line when availability stabilizes after the early adoption rush and drivers have a chance to mature is just a good idea – mashing F5 on Newegg or Nvidia's store page is never fun, and with the reception Nvidia's announcement had, you can bet that these graphics cards are going to sell out fast.

Wait for AMD to make its Big Navi move before you buy the Nvidia RTX 3080
  10. Apple's iMac 2021 may see a big change when it comes to graphics Report claims Apple will ditch AMD in favor of in-house GPU (Image credit: Future) The Apple iMac 2021 may ditch low-end AMD GPUs in favor of an in-house graphics card, according to a new report from Chinese publication Commercial Times. If true, it means that AMD's tech will be replaced by an Apple custom-designed GPU solution, at least partially, mirroring what we saw when Apple dropped Intel CPUs in favor of its own Apple Silicon. The report claims that this will happen in the second half of 2021, which seems to be timed with Apple's first ARM-based iMac launch. The in-house Apple GPU, codenamed ‘Lifuka’, is expected to be made using TSMC's 5nm manufacturing process – much like the upcoming A14X processor – with the aim to deliver higher computing performance at a fraction of the power required by other GPUs – presumably AMD's solutions. The report also notes that the so-called ‘Lifuka’ GPU uses Tile-Based Deferred Rendering (TBDR) technology licensed from Imagination Technology. This is the same technology used in Imagination’s PowerVR GPUs, that allows application developers to write more powerful professional application software and game software. It’s worth noting that while these GPUs will likely replace the lower-end discrete GPUs from AMD, more powerful iMacs – like the powerful iMac Pro workstation – look set to continue to use AMD graphics. The Commercial Times report also adds weight to recent rumors that Apple’s first ARM-based Mac will arrive in the form of a super-lightweight 12-inch MacBook that will launch by the end of the year. This will pack Apple's incoming A14X Bionic CPU, codenamed 'Tonga', which will also power 2021's MacBooks and iPads. According to recent speculation, this ARM-based CPU will equip the 12-inch MacBook – which will be making a comeback after Apple ditched the model from its lineup last year – with a battery life of 15 to 20 hours. Apple's iMac 2021 may see a big change when it comes to graphics
  11. Intel begins shipping its Xe-based Server GPU Today, Intel announced that it's now shipping its first discrete graphics card aimed at data centers, which it's simply calling the Intel Server GPU. If you've been following along, Intel gave us a thorough look at its plans for the Intel Xe graphics architecture back in August, and it included a server GPU based on it. More specifically, the Intel Server GPU is based on the Xe-LP (low power) microarchitecture, which is the same one used in the integrated graphics in 11th-generation Tiger Lake processors and the recently-released Iris Xe MAX discrete graphics. As the name suggests, it's meant to be power efficient, and it has a 128-bit pipeline and 8GB of onboard memory. The Intel Server GPU is touted as a "high-density, low-latency" solution for Android cloud gaming experiences, while also being more affordable than competing solutions from Nvidia, based on Intel's estimates. The GPU can be combined with Intel's scalable Xeon processors, allowing customers to scale the graphics capabilities without increasing the server count itself. Intel also revealed the H3C XG310 graphics card, which packs four Intel Server GPUs, and the company claims a two-card system could handle more than 100 Android cloud gaming users at a time, with up to 160 users depending on the system configuration and game. Tencent's Arena of Valor and Honor of Kings managed over 100 users per system on the company's own streaming service in China. Aside from that, Intel also revealed today that its new oneAPI will see a Gold release in December, with toolkits available for free locally and on Intel DevCloud, along with a paid version that includes international support from Intel engineers. Since oneAPI is meant to be a unified programming model, Parallel Studio XE and System Studio will immediately transition to oneAPI products. Intel also said that it plans to add support for its datacenter products in Intel Graphics Software, as it moves towards offering a scalable architecture with Xe. Currently, its software already supports integrated Iris Xe graphics in Tiger Lake processors and the Iris Xe MAX GPU. Intel begins shipping its Xe-based Server GPU
  12. Intel’s high-end gaming GPU sparks into life, and it’s rumored to challenge Nvidia’s RTX 3070 Take that RTX 3070 speculation with a hefty dose of condiments, though (Image credit: Shutterstock.com / Alexander Tolstykh) Intel has flipped the switch and officially powered on its high-end discrete Xe graphics card known as DG2. The news came from CEO Bob Swan in an Intel earnings call following its latest financial results, as reported by Seeking Alpha: “Our first discrete GPU DG1 is shipping now and will be in systems for multiple OEMs later in Q4. We also powered on our next-generation GPU for client DG2. Based on our Xe high-performance gaming architecture, this product will take our discrete graphics capability up the stack into the enthusiast segment.” In other words, this is Intel’s heavyweight gaming (Xe-HPG) card, as opposed to a mainstream model, and it’s now in a working state – but obviously with a lot more development to go yet. The DG1 card, which as mentioned is shipping now, was first powered on in this manner back in October 2019, to give you some context. So we could be seeing DG2 at this stage next year, possibly. Of course, the other big news is that DG1 will be in products on shelves later in 2020. Known as Intel Iris Xe Max, this is a discrete graphics card for laptops, and we’ve already seen it announced by some manufacturers – for example, Acer’s Swift 3x will come with a Tiger Lake mobile CPU and Iris Xe Max graphics. Needless to say, we can’t wait to see how that GPU performs in notebooks, but more eyes are likely to be on DG2, and how that might disrupt Nvidia and AMD’s duopoly when it comes to discrete graphics cards for PC gaming, hopefully later next year as has been previously rumored. Spotted in specs It would seem that Intel is already providing some idea of specs for Xe-HPG cards, given that the DG2 GPU recently popped up in system requirements (on Steam) for horror game Amnesia: Rebirth as one of the recommended graphics solutions. So hopefully that’s a good sign when it comes to the development timeframe for Intel’s DG2 card. PC Gamer, which spotted Swan’s announcement, points out that elsewhere on the GPU grapevine, leaker Moore’s Law is Dead has claimed on YouTube that DG2 could be pitched at around the same performance level as an RTX 3070 – but take that with a massive fistful of salt or two. What we do know for sure is that DG2 will support hardware accelerated ray tracing, and that it won’t be built by Intel itself, but by a third-party fab. Intel’s high-end gaming GPU sparks into life, and it’s rumored to challenge Nvidia’s RTX 3070
  13. The AchieVer

    AMD Has a New Very Fancy GPU

Photo: Alex Cranz (Gizmodo)

Usually, the biggest announcements at CES are over with by the end of the first day, but during its second-day keynote AMD CEO Lisa Su announced a new GPU, the AMD Radeon VII. According to Su, it is the very first 7nm graphics card available to consumers. While Nvidia is leaning on the eye candy of ray tracing, AMD is banking on the hype of a GPU with a smaller die process. The last generation Vega GPU was based on a 14nm process. This is half that size. A smaller die almost always means an increase in performance—usually while maintaining the same power efficiency or improving it.

AMD is bragging about its die because, notably, die sizes have been in the news a lot lately, with Intel promising a 10nm CPU (and repeatedly failing to deliver) and Apple crowing about its 7nm processor for what felt like half of the iPhone XS keynote. Nvidia's latest GPUs, the RTX 2-series, are based on a 12nm process. So theoretically the AMD GPU could be faster in games (provided you can do without the ray tracing), but GPU performance is also heavily informed by the software it operates with, and AMD's software has frequently lagged behind Nvidia's. Which is why Su took time to talk up AMD's investment in better software.

She also mentioned the kind of memory in the AMD Radeon VII, presumably to better set it apart from Nvidia. The Radeon VII will come with 16GB of second-generation high-bandwidth memory (HBM2) with a claimed bandwidth of 1TB per second. Nvidia's 2080 has 8GB of GDDR6 memory with a bandwidth of approximately 448GB per second, and the previous-generation AMD GPU, the RX Vega 64, had 8GB of HBM2 with a bandwidth of 483.8GB a second. That's half the memory at half the potential speed. What does that all actually mean?

Photo: Alex Cranz (Gizmodo)

It means, according to Su, better performance at the same power draw as the previous top-of-the-line Vega GPU (she made no mention of the 2080). She cited about 25 percent improved performance on average. Su claimed that the Radeon VII saw 35 percent better performance in Battlefield V at 4K and the highest settings, and a 25 percent improvement in Fortnite. She claimed the performance extends to non-gaming applications as well, with approximately 30-percent improved performance in apps like Photoshop and Blender, and a whopping 62-percent improved performance across other OpenCL apps.

Notably absent from the Radeon VII announcement is any mention of ray tracing, the slick feature Nvidia is touting in its GPUs. But the Radeon VII will cost $700 when it's available February 7. That's at least $300 less than Nvidia's top GPU, though twice the price of Nvidia's cheapest ray tracing card, the just announced RTX 2060. Can it possibly be worth it? We'll know more when we try our own Radeon VII card for a review in the coming weeks.

Source
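For context on the bandwidth comparison above: memory bandwidth is roughly bus width times effective data rate. The short sketch below reproduces the quoted figures; the bus widths and data rates are the commonly published specs for these cards, assumed here for illustration since the article only quotes the resulting bandwidth numbers.

```python
# How the memory-bandwidth numbers quoted above are derived:
#   bandwidth (GB/s) = (bus width in bits / 8) * effective data rate (Gbps)
# Bus widths and data rates below are the commonly published specs for these
# cards (assumptions for illustration; the article only quotes the results).

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

cards = {
    "Radeon VII (16GB HBM2)": (4096, 2.0),    # ~1 TB/s claimed
    "RTX 2080 (8GB GDDR6)":   (256, 14.0),    # ~448 GB/s
    "RX Vega 64 (8GB HBM2)":  (2048, 1.89),   # ~483.8 GB/s
}
for name, (bus, rate) in cards.items():
    print(f"{name}: {bandwidth_gb_s(bus, rate):.1f} GB/s")
```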
  14. Intel looking to tackle Ryzen 3 with cheaper, GPU-less chips?

With the launch of AMD's hotly-anticipated Ryzen 3rd Generation processors just around the corner – the new CPUs are expected to be officially unveiled this Wednesday, January 9 at AMD's CES 2019 conference – Intel has today used its CES event to finally announce it's adding six more 9th-gen Core processors, ranging from Core i3 to Core i9, set to release soon. The new processors join the company's three existing 'flagship' 9th-generation desktop chips, which launched in October last year – the Core i5-9600K, i7-9700K and i9-9900K – as well as the 9th-generation X-series for HEDT systems.

Intel didn't officially announce full details of the new processors, but we've been able to dig up information on all six of them via some URL experimentation in Intel's ARK product database:

• Intel Core i3-9350KF: 4-cores, 4-threads, no integrated graphics, clocked at 4.0GHz to 4.6GHz
• Intel Core i5-9400: 6-cores, 6-threads, Intel UHD Graphics 630, clocked at 2.9GHz to 4.1GHz
• Intel Core i5-9400F: 6-cores, 6-threads, no integrated graphics, clocked at 2.9GHz to 4.1GHz
• Intel Core i5-9600KF: 6-cores, 6-threads, no integrated graphics, clocked at 3.7GHz to 4.1GHz
• Intel Core i7-9700KF: 8-cores, 8-threads, no integrated graphics, clocked at 3.6GHz to 4.9GHz
• Intel Core i9-9900KF: 8-cores, 16-threads, no integrated graphics, clocked at 3.6GHz to 5.0GHz

What's perhaps most intriguing about them is that five of the six new chips appear to be part of a brand new F-series of processors, which have removed (or most likely disabled) the integrated graphics chip that almost every mainstream Intel processor now includes. That may be an attempt to reduce costs (as it will allow the chip-maker to sell CPUs with non-functional GPUs) but it will likely also mean these processors run cooler and use less power – and they could be better for overclocking as a result. Somewhat surprisingly, Intel didn't expressly mention the new F-series at its press conference.

If these new GPU-less processors do come at a reduced price, they may also be aimed at tackling AMD's Ryzen family of processors, which largely offer better bang for buck than their Intel equivalents. With many mid-range and higher-end PCs still coming equipped with a dedicated graphics card, Intel's integrated chips often go to waste, so offering a range of cheaper, GPU-less processors may help Intel win back some of the value-oriented market segment that it's recently been losing to AMD.

Source
  15. Over the weekend, Nvidia announced that it has trained an AI – which you can test out yourself – to put the expression of one animal realistically onto the face of another. With Nvidia's new GANimal app, you can put the smile of your pooch onto the face of a lion, tiger, or bear. In fact, this app can recreate the expression of any animal on the face of any other creature. To accomplish this, the technology company trained an AI using generative adversarial networks, "an emerging AI technique that pits one neural network against another". The network can translate the image onto a slew of target animals – even those it has never seen before. Instead of having to feed the network several images of your dog, it can perform this task with just one input image, which makes the process simpler than ever. Users can try it themselves and put their pet's expression on animals like an Arctic fox, American black bear or lynx. According to the company, this type of technology could potentially be used in filmmaking, to not only alter animals' expressions, but also to map their movements and recreate them on leopards or jaguars. The GANimal app is the most recent step in the company's lead computer-vision researcher Ming-yu Liu's goal to "code human-like imagination into neural networks". The tool uses the same kind of network as the one behind GauGAN, a technology that turned simple doodles into photorealistic landscapes, which users can also try out for themselves. Source: Nvidia trained AI to put your pup's smile on a lion (via The Star Online)
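The post above quotes Nvidia describing generative adversarial networks as a technique that "pits one neural network against another." As a rough, generic illustration of that idea - this is not Nvidia's GANimal/FUNIT code or API - here is a minimal GAN training loop in PyTorch on random stand-in data:

```python
# Minimal GAN training loop illustrating the "one network pitted against
# another" idea. Generic toy sketch on random vectors, not the GANimal model.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32

# Generator maps noise to fake "data"; discriminator scores real vs. fake.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    real = torch.randn(batch, data_dim)   # stand-in for real samples
    fake = G(torch.randn(batch, latent_dim))

    # Discriminator learns to tell real from fake.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # Generator learns to fool the discriminator.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
```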
  16. Today, at the 5G Mobile World Conference, Nvidia co-founder and CEO Jensen Huang announced Nvidia Jarvis, a multi-modal AI software development kit that combines speech, vision, and other sensors in one AI system.

As stated before, Jarvis is the company's attempt to process multiple inputs from different sensors simultaneously. The thinking behind this approach is that it will help build context for accurately predicting and generating responses in conversation-based AI applications. Nvidia gave examples of situations where this might help in a post on its blog.

In Jarvis, Nvidia has included modules that can be tweaked according to the user's requirements. For vision, Jarvis has modules for person detection and tracking, detection of gestures, lip activity, gaze, and body pose, while for speech, the system has sentiment analysis, dialog modeling, and domain, intent, and entity classification. For integration into the system, fusion algorithms have been employed to synchronize the working of these models. Moreover, the firm claims that Jarvis-based applications work best when used in conjunction with Nvidia Neural Modules (NeMo), a framework-agnostic toolkit for creating AI applications built around neural modules.

For cloud-based applications, services developed using Jarvis can be deployed using the EGX platform, which Nvidia is touting as the world's first edge supercomputer. For edge and Internet of Things use cases, Jarvis runs on the Nvidia EGX stack, which is compatible with a large swath of the Kubernetes infrastructure available today.

Jarvis is now open for early access. If you are interested, you can log in to your Nvidia account and sign up for early access to it here.

Source: Nvidia Jarvis—a multi-modal AI SDK—fuses speech, vision, and other sensors into one system (via Neowin)
  17. Deep learning and its applications have grown in recent years. Recently, researchers from ETH Zurich used the technique to study dark matter in an industry first. Now, a team working with the University of California, Berkeley and the University of California, San Francisco (UCSF) School of Medicine has trained a convolutional neural network dubbed "PatchFCN" that detects brain hemorrhages with remarkable accuracy.

In a paper titled "Expert-level detection of acute intracranial hemorrhage on head computed tomography using deep learning", the team reports an accuracy of 99 percent, which is the highest recorded accuracy to date for detecting brain hemorrhages. In some cases, the neural network's performance eclipsed even that of seasoned radiologists.

PatchFCN was trained on a dataset of more than 4,000 CT scans from UCSF-affiliated hospitals using Nvidia V100 Tensor Core GPUs and Amazon Web Services. The training and analysis were done in a novel way whereby the team divided up the CT scans into segments that were each subsequently analyzed by the model. The team then experimented with the segment size to find the setting that yielded the best results and increased the model's accuracy. Furthermore, according to the researchers, each picture can be analyzed within seconds by their trained model.

After analysis, the model, in addition to passing a verdict on the existence of a brain hemorrhage, also provides a detailed tracing and measurement of each hemorrhage. In the context of a hospital, this can be a vital asset. PatchFCN will not only improve throughput but will also relieve pressure on radiologists, thereby improving their efficiency and productivity, the team believes. For more information and the specifics of the study, you can refer to the paper published here.

Source:
1. Neural network system has achieved remarkable accuracy in detecting brain hemorrhages (via Neowin) - main article
2. Deep Learning Detects Brain Hemorrhages with Accuracy Rivaling Experts (via Nvidia Blog) - supporting reference
3. Expert-level detection of acute intracranial hemorrhage on head computed tomography using deep learning (via PNAS) - academic research paper
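The patch-based evaluation described above (dividing each CT scan into segments and scoring each one) can be sketched generically. The code below is an illustrative stand-in, not the PatchFCN model: the network, patch size, and threshold are placeholder assumptions.

```python
# Generic sketch of patch-based evaluation: split a CT slice into tiles,
# score each tile with a model, then stitch the per-tile outputs back
# together. The model, patch size, and threshold are placeholders, not the
# PatchFCN paper's actual values.
import numpy as np

def predict_patch(patch: np.ndarray) -> np.ndarray:
    """Placeholder for a trained model; returns a per-pixel probability map."""
    return np.random.rand(*patch.shape)  # stand-in for real inference

def patchwise_predict(scan: np.ndarray, patch: int = 64) -> np.ndarray:
    h, w = scan.shape
    out = np.zeros_like(scan, dtype=float)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            tile = scan[y:y + patch, x:x + patch]
            out[y:y + patch, x:x + patch] = predict_patch(tile)
    return out

ct_slice = np.random.rand(512, 512)          # stand-in for one CT slice
heatmap = patchwise_predict(ct_slice)
print("flagged pixels:", int((heatmap > 0.99).sum()))
```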
  18. AMD launches Radeon Pro W5700, the first 7nm GPU for workstations

Today, in addition to launching the Athlon 3000G processor, AMD has announced the world's first 7nm GPU for workstations, the Radeon Pro W5700. This new GPU is the first in the Radeon Pro W5000 series, and it's based on the company's new RDNA architecture, which promises up to 25% more performance per clock compared to the previous GCN architecture.

The Radeon Pro W5700 also promises up to 41% more performance per watt compared to the GCN-based Radeon Pro WX 8200. It also claims to be 18% more power-efficient than Nvidia's Quadro RTX 4000 GPU. AMD also boasts better multitasking capabilities when the CPU is under load, promising up to 5.6 times the workflow performance compared to the same Nvidia card.

The Radeon Pro W5700 is also the first workstation GPU to support PCIe 4.0 for additional bandwidth and it also comes with 8GB of GDDR6 memory. Additionally, it's the first GPU of its kind to come with a USB Type-C port to support the growing number of monitors that use it for video input. Here's a quick rundown of the specs:

• GPU: Radeon Pro W5700
• Compute units: 36
• TFLOPS: Up to 8.89
• Memory (bandwidth): 8GB GDDR6 (448GB/s)
• Memory interface: 256-bit
• Display outputs: 6

The AMD Radeon Pro W5700 is available today in the North America, EMEA, and Asia Pacific regions, starting at $799.

Source: AMD launches Radeon Pro W5700, the first 7nm GPU for workstations (Neowin)
  19. Windows 10 20H1 will allow users to monitor GPU temperature with ease

After the release of the Windows 10 November Update last month, Microsoft went back to work on 20H1, which is scheduled for release in the spring of 2020. Since Microsoft made it clear that the November update would be an incremental update, Windows 10 users have high hopes for 20H1, as it's slated to bring a plethora of new features. While we expect to see some major feature additions to the OS, Microsoft is also working on small but important changes. One of those is the ability to monitor GPU temperatures with ease.

While Windows 10 does show detailed GPU usage, it doesn't show the actual temperature, and because of that, you need third-party apps to monitor the temperature. However, with the release of Windows 10 Build 18963, Microsoft has added a GPU temperature monitor to the Task Manager. This feature is targeted at people using dedicated graphics cards, which tend to heat up a lot compared to integrated GPUs from Intel and AMD.

Microsoft released Build 18963 quite a while back, but the feature has been working perfectly fine on the Windows 10 Insider Preview and should be added to the final 20H1 update. We tested the feature on one of our test machines and, unfortunately, it only records the present temperature. So if you're after historic data, then you will have to rely on third-party apps. That said, if you just want to monitor temperature at regular intervals, then you can do so from the Task Manager soon.

Source: Windows 10 20H1 will allow users to monitor GPU temperature with ease (MSPoweruser)
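Until the Task Manager feature ships in 20H1, the third-party route mentioned above can be as simple as querying the GPU vendor's own tool. The sketch below reads temperatures from nvidia-smi; it only works for Nvidia GPUs and assumes nvidia-smi is on the PATH, while AMD and Intel GPUs need different tooling.

```python
# Reading GPU temperature today, without Task Manager, via nvidia-smi.
# Nvidia GPUs only; assumes the driver's nvidia-smi tool is on the PATH.
import subprocess

def nvidia_gpu_temps() -> list[int]:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu", "--format=csv,noheader"],
        text=True,
    )
    return [int(line) for line in out.splitlines() if line.strip()]

if __name__ == "__main__":
    for i, t in enumerate(nvidia_gpu_temps()):
        print(f"GPU {i}: {t} °C")
```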
  20. Nvidia’s next Tesla GPUs could be 75% faster – good news for future GeForce cards Big Red 200 supercomputer upgrade hints at equally big things for consumer GPUs (Image credit: TechRadar) Big Red 200, a new supercomputer at Indiana University, is now up and running, and later this year will be upgraded to make use of Nvidia’s next-gen GPUs, which will potentially be up to 75% faster than current Tesla graphics solutions. This is according to a report from The Next Platform, which spoke to Brad Wheeler, VP for IT and CIO at Indiana University, airing the claim that Nvidia’s next-gen Tesla graphics solutions – which will be deployed as a ‘phase two’ upgrade for Big Red 200 in the summer – will be around 70% to 75% faster than current offerings. That’s a huge leap in performance, of course, and while you might think that it’s not particularly relevant to the average PC user – with these being heavyweight GPUs in a massive supercomputer – remember that the technology Nvidia uses here could trickle down to its consumer GeForce offerings. And an (up to) 75% performance increase in Tesla lends more credence to the (admittedly fairly wild) previous rumor that has been floated, which contends that Nvidia’s next-gen GeForce graphics cards could benefit from a 50% performance uplift (albeit this could, potentially, only pertain to ray tracing scenarios – although this is all just up-in-the-air theorizing, of course). Speculation has it that Nvidia’s next-gen Tesla GPUs might be unveiled at the firm’s GPU Technology Conference in March (this isn’t the first time we’ve heard that Ampere graphics cards will be revealed at GTC in San Jose – although other corners of the rumor mill seem to believe that this could mean a consumer GeForce card, rather than a data center offering). An unveiling at GTC in March might be ahead of a summer launch for the new heavyweight cards, which would line up with the proposed Big Red 200 upgrade time frame as mentioned. As ever, we have to treat any speculation with a great deal of caution, but nonetheless, this represents a potentially exciting glimpse of how powerful Nvidia’s next-gen graphics tech could be in terms of heavyweight computing – hinting at similar things for consumer GPUs. Epyc beast Big Red 200 is a Cray Shasta supercomputer, and it launched with 672 dual-socket nodes carrying AMD’s Epyc 7742 (2nd-gen server) 64-core processors. In the phase two upgrade, further Epyc chips will be added to the machine, along with the aforementioned next-gen Tesla GPUs. The University decided to take this two-stage deployment approach when it discovered that if it waited a bit longer, it could benefit from Nvidia’s next-gen products, rather than going with Nvidia V100 GPUs as was originally planned. With those V100 cards, Big Red 200 would have been capable of a peak performance level in the order of 5.9 petaflops, but using the newer GPUs, the supercomputer should instead see performance up to 8 petaflops. Source: Nvidia’s next Tesla GPUs could be 75% faster – good news for future GeForce cards (TechRadar)
  21. AMD has sold more GPUs than Nvidia, according to this analyst report

Team Green is falling behind, though not by much (Image credit: Future)

Team Red is on fire. It seems like AMD's winning streak won't end anytime soon. After leaked figures from Mindfactory revealed that AMD's Ryzen CPU sales are destroying those of Intel, the latest report from Jon Peddie Research is now showing that the Santa Clara company is winning in the GPU market as well.

According to JPR's Market Watch Q4 2019 report, AMD saw a 22.6% increase in overall GPU shipments in Q4 2019. This means that AMD now has a 19% share of the GPU market, which is a 3% increase from Q3, while rivals Nvidia and Intel saw 0.97% and 2% drops respectively. That leaves Nvidia with only an 18% share, putting AMD in the lead between the two.

That said, Intel still dominates the market with its integrated and discrete GPUs, taking 63% of the market share in Q4. And Nvidia is still king of the discrete GPU game, taking 73% of discrete GPU shipments in 2019 over AMD's 27%.

However, the fact that AMD's GPU sales are steadily going up is still great news for the company. AMD's shipments of discrete graphics in particular progressed to 27% of the market total, up from 26% in 2018 and 24% in Q3 2019. With the highly-anticipated "Nvidia killer" Radeon RX 5950 XT just around the corner, those numbers are likely to go higher in 2020. Of course, it's also entirely possible that Intel's promising Xe discrete graphics will only perpetuate Team Blue's dominance, especially in the laptop market.

Good news for the GPU market in general

It's not just AMD that's enjoying the fruits of its labor, however. According to Market Watch, overall GPU shipments increased 3.4% from Q3 2019. The overall attach rate of GPUs to PCs was up by 1.8%, and the number of desktop graphics add-in boards (AIBs) that use discrete GPUs also saw a 12.17% increase in Q4. Considering that GPU shipments have been historically flat in the fourth quarter, this is excellent news for the graphics card industry. JPR President Jon Peddie even notes that this is "the third consecutive quarter of increased GPU shipments."

It's not all good news. With the coronavirus epidemic crippling many of China's factories and thus interrupting the supply chain, Q1 2020 "may show an unusual dip," says Peddie. However, with "Intel's entry into the discrete GPU market and a possible fourth entry by an IP company," 2020 is still going to be an exciting year in the graphics card game.

Source: AMD has sold more GPUs than Nvidia, according to this analyst report (TechRadar)
  22. AMD confirms ‘Nvidia killer’ graphics card will be out in 2020 Big Navi could show up sooner rather than later this year (Image credit: AMD) AMD’s chief executive has confirmed that a high-end Navi graphics card will be released this year. In a video interview entitled ‘The Bring Up’ posted on YouTube by AMD (see below), Lisa Su noted that people were wondering about Big Navi – said high-end GPU, which has previously been referred to as the ‘Nvidia killer’ in terms of how it will take on the top-end RTX cards. The CEO then said: “I can say you’re going to see Big Navi in 2020.” This is the first concrete confirmation we’ve had that AMD will definitely be unleashing its big graphics firepower this year, although rumors have always pointed to this, and indeed comments that Su made in a recent roundtable Q&A session at CES 2020. At CES, the CEO stressed how important a top-end GPU was to AMD, and said that “you should expect that we will have a high-end Navi, although I don’t usually comment on unannounced products”. The hint was certainly that this GPU would arrive in 2020, but she didn’t actually say that. So at least now we have a confirmation, even if that really isn’t a surprise to anyone who’s been following AMD’s rumored progress in the graphics card arena lately. Battle of the flagships There has been no shortage of speculation around all this, including that the high-end graphics card could be 30% faster than Nvidia’s RTX 2080 Ti (if the unknown GPU which is the subject of that leak is indeed Big Navi, and that’s a fairly sizeable if). Of course, AMD needs to move quickly enough with the release to make sure it isn’t competing against the RTX 3080 Ti (which might be up to 50% faster than its Turing predecessor, so the rumor mill reckons – although that might be just with ray tracing). Nvidia’s next-gen Ampere GPUs are expected to launch in the latter half of 2020, in case you were wondering. Another potential sign that we might see the high-end Navi graphics cards sooner rather than later is that an EEC filing has just been made for the Radeon RX 5950XT. And a GPU with the same name has been filed previously (back in June 2019), indicating that the 5950XT could be the flagship model for 2020. As ever, we need to take such speculation with a good degree of caution, though. Source: AMD confirms ‘Nvidia killer’ graphics card will be out in 2020 (TechRadar)
23. Leak shows Intel's DG1 Xe discrete GPU dev kits may be ready to be sampled soon

At CES 2020, a couple of days ago, Intel demoed its upcoming 10nm+ Tiger Lake CPUs and also teased its Xe DG1 discrete graphics running Destiny 2. With the chip up and running, Intel appears ready to ship development kits of its Xe GPU, according to this leaked press deck. The company appears to have named the kit a 'Software Development Vehicle' (SDV), and these will be sampling to independent software vendors (ISVs) worldwide.

The design of the SDV is aesthetically pleasing, with stylish grooves on the top and an Xe-branded backplate at the bottom. It is a single-fan card with no apparent external power connector, hinting at a low power requirement for this particular design.

It's been known since Supercomputing 2019 that Intel plans to scale its Xe architecture through the entire spectrum of the graphics market, from high-end HPC needs down to low-power (LP) mobile use cases, starting with Tiger Lake. Intel seems to have reiterated that fact and has only added nomenclature denoting each tier of performance.

To sum up, Intel's plans for its Xe architecture seem grand as the company looks to take on two behemoths in the GPU market. With time, we will know how Intel manages to measure up.

via Videocardz

Source: Leak shows Intel's DG1 Xe discrete GPU dev kits may be ready to be sampled soon (Neowin)
24. Best cheap graphics cards 2020: the top graphics cards on a budget

Get the best cheap graphics cards for your dollar

(Image credit: Future)

Ah, the graphics card. Whether you're a dedicated esports enthusiast or a creative working with ultra-high definition visuals, the graphics card is possibly the most important component in your computer. Unfortunately, it's also priced as such, making the search for the best cheap graphics card a difficult one.

Still, there are options out there for those without a whole lot of money to spend. Between Nvidia and AMD, there are several quality GeForce and Radeon graphics cards out there that technically count as affordable if you know where to look. Fortunately, we do, and we're here to help you find the best cheap graphics card to fit your needs – and your budget.

Best cheap graphics card at a glance

AMD Radeon RX 5700
Nvidia GeForce GTX 1660 Super
AMD Radeon RX 5600 XT
Gigabyte GeForce GTX 1660 OC 6G
AMD Radeon RX 5500 XT

(Image credit: AMD)

1. AMD Radeon RX 5700
AMD's best cheap graphics card all around

Stream Processors: 2,304 | Core base clock: 1,465 MHz | Core boost clock: 1,725 MHz | Memory: 8 GB GDDR6 | Memory clock: 14 Gbps | Power connectors: 1 x 8-pin and 1 x 6-pin | Outputs: 1 x DisplayPort 1.4 DSC, 1 x HDMI with 4K60 support

+ Excellent performance
+ 1440p gaming at Ultra/Max settings
- No ray tracing
- Just barely "affordable"

When it comes to "budget" AMD graphics cards, the Radeon RX 5700 is about as good as it gets. Capable of delivering 1440p gaming on Ultra or Max settings, you're going to be hard-pressed to find a better mid-range graphics card at this price point. Some sacrifices had to be made for affordability, however – and it is just barely affordable. The Radeon RX 5700 doesn't come with ray tracing, so the budget-minded will have to wait a little while longer before that feature makes it into even the best cheap graphics cards on the market. Maybe next year.

Read the full review: AMD Radeon RX 5700

(Image credit: Nvidia)

2. Nvidia GeForce GTX 1660 Super
A powerhouse for 1080p gaming

Stream Processors: 1,408 | Core base clock: 1,520 MHz | Core boost clock: 1,785 MHz | Memory: 6 GB GDDR6 | Memory clock: 14 Gbps | Power connectors: 1 x 6-pin | Outputs: 1 x DisplayPort 1.4a, 1 x HDMI 2.0b, DL-DVI

+ Excellent 1080p performance
+ Affordable
- No RT cores
- Limited ports

Replacing the Nvidia GeForce GTX 1660, the Nvidia GeForce GTX 1660 Super is absolutely one of the best cheap graphics cards on the market right now. It is able to crank out nearly 80 FPS in Middle-earth: Shadow of War on Ultra graphics settings at 1080p, and it even manages a decent 54 FPS at 1440p. This is incredible considering it comes in at under $250 (£200, AU$400).

Still, the Nvidia GeForce GTX 1660 Super does have its drawbacks. It still goes with a DVI port in lieu of a second HDMI port (or even a USB-C), and while it does have a DisplayPort, you won't be running several displays with this card. It also lacks ray tracing cores, but that's not surprising given that this is a budget graphics card. Still, you're going to be hard-pressed to find another graphics card that's as good as the Nvidia GeForce GTX 1660 Super for the price.

Read our full review: Nvidia GeForce GTX 1660 Super

(Image credit: AMD)

3. AMD Radeon RX 5600 XT
Raw performance at a budget price

Stream Processors: 2,304 | Core base clock: 1,355 MHz | Core boost clock: 1,560 MHz | Memory: 6 GB GDDR6 | Memory clock: 14 Gbps | Power connectors: 1 x 8-pin | Outputs: 1 x DisplayPort 1.4, 1 x HDMI 2.0

+ Incredible performance
+ Competitive against more expensive cards
- No ray tracing
- Limited ports

With all the cheap 1080p graphics cards out there, it's especially hard for gamers to find the right card for their systems and budget. In a market flooded with Nvidia offerings, the AMD Radeon RX 5600 XT thankfully stands out as one of the best cheap graphics cards in its class, and it can even bloody the noses of some of the more expensive mid-range cards in terms of raw performance.

Still, it is an AMD card, so it doesn't have ray tracing, and for a card that costs nearly $300 (about £250, AU$470), it's worth asking whether it's worth paying just a bit more for the Nvidia GeForce RTX 2060, which does have ray-tracing capabilities. If you can get by without the ray tracing, though, the AMD Radeon RX 5600 XT is possibly the best cheap graphics card you're going to find.

Read our full review: AMD Radeon RX 5600 XT

(Image credit: Gigabyte)

4. Gigabyte GeForce GTX 1660 OC 6G
High performance, low price

Stream Processors: 1,408 | Core base clock: 1,530 MHz | Memory: 6 GB GDDR5 | Memory clock: 8 Gbps | Power connectors: 1 x 8-pin | Outputs: 3 x DisplayPort 1.4, 1 x HDMI 2.0

+ Turing architecture
+ Excellent performance
- Only GDDR5

While Nvidia GeForce graphics cards tend to shine brightest at the high end, they don't always pack the same performance-to-price value as a solid AMD offering. Fortunately, not every GeForce card is out of reach, and the Gigabyte GeForce GTX 1660 OC 6G brings the latest Nvidia Turing architecture to the budget-minded consumer. With excellent 1080p gaming performance and even some solid 1440p gaming with the right settings, the GeForce GTX 1660 OC is one of the best cheap graphics card options for gamers who want a little bit more from their graphics card without paying a lofty premium.

Read our full review: Gigabyte GeForce GTX 1660 OC 6G

(Image credit: AMD)

5. AMD Radeon RX 5500 XT
AMD Navi at a budget price point

Stream Processors: 1,408 | Core base clock: 1,717 MHz | Core boost clock: 1,845 MHz | Memory: 8 GB GDDR6 | Memory clock: 14 Gbps | Power connectors: 1 x 8-pin | Outputs: 1 x DisplayPort 1.4 with DSC, 1 x HDMI with 4K60

+ Very affordable
+ Solid 1080p gaming performance
- Struggles with the most demanding 1080p games

If you're looking for solid 1080p gaming without spending a ton, you can't go wrong with the AMD Radeon RX 5500 XT. It definitely doesn't aim beyond its reach, sticking to quality 1080p performance and edging out the rival GeForce GTX 1660 when factoring in the price. Capable of pumping out 60 FPS in most AAA titles, the AMD Radeon RX 5500 XT will struggle with Metro Exodus on high settings, much less ultra. But for most games, it'll be tough to find a better graphics card at this price point.

Source: Best cheap graphics cards 2020: the top graphics cards on a budget (TechRadar)
25. Starts early as it comes in 2020

Intel has now officially started to tease its upcoming dedicated GPU, codenamed Arctic Sound, which is now confirmed for 2020. The teaser does not come as a surprise considering that Chris Hook, an ex-AMD marketing veteran, is now at Intel, pushing its marketing machinery into motion. The video teaser is also the first piece of information published on the new Intel Graphics Twitter account, and we suspect that it won't be the last.

In a response to Charlie Demerjian, Editor-in-Chief of the Semiaccurate.com site, which has been a thorn in the side of many companies, especially Intel, Chris Hook noted that it "will take time and effort to be the first successful entrant into the dGPU segment in 25 years, but we have some incredible talent at Intel, and above all, a passion for discrete graphics".

In case you somehow missed it, Hook was not the only one switching to the blue camp, as Intel also managed to snatch Raja Koduri, formerly Senior VP and Chief Architect at AMD, who is now the person behind Intel's future push into the graphics market.

Earlier rumors suggested that Intel is working on some sort of scalable GPU, which will be able to cover desktop, mobile, and even workstation markets. The video also confirms that we won't see it before 2020, but nevertheless, competition is always good for the consumer, so we are looking forward to it.

View: Original Article