Showing results for tags 'intel'.

  1. The spectacular performance of Apple’s M1 Max has hit morale in the x86 world, where it was long a given that you could buy a Windows laptop with more power than whatever Apple could produce, especially after Apple transitioned to ARM. With the M1 Max this was clearly no longer the case, which may explain why Intel appears to have leaked its upcoming Alder Lake Mobile processor to hardware site wccftech, which promptly benchmarked it. Fortunately for Intel and the rest of the x86 world, the Intel Core i9-12900HK mobile CPU benchmarked faster than the Apple M1 Max, the Intel Core i9-11980HK, and the AMD Ryzen 9 5980HX. The 12900HK's P-cores help it to a 1,851 single-core Geekbench score, topping the 1,785 of the 5 nm-based Apple M1 Max and beating the older Core i9-11980HK's 1,616 single-core score by 14.5%. The 12900HK also, unexpectedly, beat the Apple M1 Max in the Geekbench multi-core benchmark, scoring 13,256 vs 12,753 for the Apple processor, and it is 45% faster than the Intel 11980HK, which scored only 9,149 points. The Alder Lake processor was tested on Windows 11, which has new optimizations for the Alder Lake architecture, with its hybrid mix of performance cores and highly efficient "E-cores" and support for Intel’s new Thread Director technology. An Apple M1 Max laptop will likely still last much longer off the power cord, given its much lower power consumption and better performance per watt, but if you ask which is the most powerful laptop CPU, the answer will be the Apple M1 Max chip for only a few more months: the Alder Lake Mobile CPUs are expected to ship in early 2022. Intel’s Alder Lake Mobile processor gets benchmarked, handily beats the Apple M1 Max
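The relative-performance figures quoted above follow directly from the Geekbench scores; a quick sanity check in Python, using only the numbers as reported in the article (not independently verified):

```python
# Sanity-check the performance deltas quoted above, using the Geekbench
# scores as reported in the article (not independently verified).

def pct_faster(a: int, b: int) -> float:
    """Percentage by which score `a` exceeds score `b`."""
    return (a / b - 1) * 100

single_gain = pct_faster(1851, 1616)   # 12900HK vs 11980HK, single-core
multi_gain = pct_faster(13256, 9149)   # 12900HK vs 11980HK, multi-core
vs_m1_max = pct_faster(13256, 12753)   # 12900HK vs M1 Max, multi-core

print(f"single-core vs 11980HK: +{single_gain:.1f}%")  # matches the 14.5% claim
print(f"multi-core vs 11980HK:  +{multi_gain:.1f}%")   # ~45%, as claimed
print(f"multi-core vs M1 Max:   +{vs_m1_max:.1f}%")
```

The multi-core lead over the M1 Max works out to under 4%, which puts the "handily beats" framing in context.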
  2. The Intel® Battery Life Diagnostic Tool helps IT managers monitor and diagnose the battery-life impact of standard IT software builds on systems that support Intel vPro® Technology. When run on a vPro system, the tool estimates expected battery life, provides insight into the system’s ability to reach low-power states, and highlights the applications, processes, and configuration errors that hurt low-power efficiency. Visit the support page for additional information. What files do I need? The bldt_setup.exe file is the installer for the application and is the only file you need to run it. The User Guide contains detailed instructions on using the tool and interpreting the results. The Release Notes explain the changes made to the application in the most recent release. The "(Optional) Intel® Battery Life Diagnostic Tool third-party component sources" file contains the source code for third-party, open-source components used in the application; it is provided for open-source licensing purposes only, and you do not need it to run the application. README Text Files ((optional) intel(r) battery life diagnostic tool third party component sources.7z) Home: https://downloadcenter.intel.com/download/29348/Intel-Battery-Life-Diagnostic-Tool or https://www.intel.com/content/www/us/en/download/19521/29348/intel-battery-life-diagnostic-tool.html Changelog: https://downloadmirror.intel.com/29348/Intel_BLDT_Release_Notes_2.0.0.pdf User Guides (Intel_BLDT_User_Guide_2.0.0.pdf) Download: https://downloadmirror.intel.com/29348/bldt_setup-v2.0.0.exe SHA1: 0E74307A04B5BB41BC75096B3EAA849943CFC722
  3. Intel earlier today released its newest DCH graphics driver, version 30.0.100.9955. The major highlight of the new driver is the addition of support for H264 and HEVC DX12 video encode on Windows 11. Aside from that, it brings several fixes for bugs in games and other applications, along with a long list of known issues. The bugs fixed in this release are: Minor graphic anomalies seen in Cyberpunk 2077* (DX12), Hitman 2* (DX12), Wolfenstein: Youngblood* (Vulkan). Minor graphic anomalies seen in Monster Jam Steel Titans 2*, Ark: Survival Evolved* (when Intel Sharpening Filter is enabled) on 11th Generation Intel® Core™ Processors with Intel® Iris® Xe graphics. Black screen seen in Rage 2* (Vulkan) (after ALT + TAB) on 11th Generation Intel® Core™ Processors with Intel® Iris® Xe graphics. Intermittent crash or hang seen in Ark: Survival Evolved* (during launch), Star Wars: Squadrons* (during launch), Warframe* (DX12) on Intel® Iris® Xe Discrete graphics. Minor graphic anomalies seen in Euro Truck Simulator 2*, Marvel’s Avengers* (DX12), Metro Exodus* (DX12) on Intel® Iris® Xe Discrete graphics. The unresolved issues that remain are: Intermittent crash or hang may be seen in Call of Duty: Black Ops Cold War* (DX12), Forza Horizon 4* (DX12) (after launch), The Sims 4* (hot plug or unplug of external display). Minor graphic anomalies may be observed in Dead by Daylight*, Death Stranding* (DX12), Forza Horizon 4* (DX12), Microsoft Flight Simulator* (during installation), Red Dead Redemption 2* (DX12), Red Dead Redemption* (Vulkan) (while pressing ESC), Resident Evil Village* (DX12) (while pressing ESC), Shadow of the Tomb Raider* (DX12), The Surge 2* (Vulkan), Tom Clancy’s Ghost Recon Breakpoint*. 
11th Generation Intel® Core™ Processors with Intel® Iris® Xe graphics: Crash or hang may be seen in Breathedge*, Deathloop* (DX12), Detroit: Become Human* (Vulkan), FIFA 21*, Moonlight Blade* (DX12), NBA 2K21* (DX12) (when switching resolution from 1920 x 1080 to 1600 x 900 and 1280 x 720), Necromunda: Hired Gun* (DX12) (Shadow option other than low), Red Dead Redemption* (Vulkan), The Dark Pictures: Man of Medan* (online mode), Vendetta Online* (Vulkan). 11th Generation Intel® Core™ Processors with Intel® Iris® Xe graphics: Black screen may be seen in Arma 3* (when changing display mode to windowed or full screen). 11th Generation Intel® Core™ Processors with Intel® Iris® Xe graphics: Minor graphic anomalies may be seen in Adobe Premiere Pro 2020 during video playback, Call of the Sea*, Crysis Remastered*, Dark Souls 3*, Elex*, Gears of War Ultimate Edition* (DX12), Gears 5* (DX12), Just Cause 4*, Mount & Blade II: Bannerlord*. Intel® Iris® Xe Discrete graphics: Intermittent crash or hang may be seen in Conan Exiles* (“Low End Laptop Mode” in game settings), Doom Eternal* (Vulkan), Forza Motorsport 6* (DX12), Spyro: Reignited Trilogy*. Intel® Iris® Xe Discrete graphics: Crash or hang may be seen when launching Batman: Arkham City*, Outriders* (DX12). Intel® Iris® Xe Discrete graphics: Minor graphic anomalies may be observed in Assassin’s Creed Valhalla* (DX12), Code Vein*, GRID 2019* (DX12), Rocket League* (when CMAA is enabled). 11th Generation Intel® Core™ Processors with Intel® Iris® Xe graphics: Display may blank out at [email protected] resolution when connected via dock. 11th Generation Intel® Core™ Processors with Intel® Iris® Xe graphics: Thunderbolt display may not turn on after HDR is enabled for the built-in display. The new driver is compatible with all Intel graphics products from the 6th-gen (Skylake) processors and newer, as well as Xe-based products. To download it, head over to Intel's official site, where you can find the relevant links. 
There is a separate driver for Kaby Lake G, though, as it is built with Radeon Vega M graphics inside. You can find details about that driver here. https://www.neowin.net/news/intels-latest-3001009955-driver-brings-additional-video-encode-support-on-windows-11/
  4. ‘I am 100 percent loyal to Apple’ If you buy something from a Verge link, Vox Media may earn a commission. See our ethics statement. I regret to inform you that Intel has again made a terrible ad to showcase how absolutely not mad it is about Apple switching to its own processors. This one, called “Breaking the Spell,” follows a classic format — people are put in a room and told about features supposedly coming to MacBooks. Then, after they get all excited, the twist! They were actually being told about Windows computers, powered (of course) by Intel. The folks in the ad (who the fine print says are “real people paid for their time and opinions”) are speechless and then clamor to buy or take the laptops, presumably now that they’ve awoken to the possibilities of the PC. It’s an ad format that’s never been great, but the real kicker is that it makes absolutely no sense here — in the intro, Intel claims it’s showing off “new technology” and “innovations” and then proceeds to show people absolutely gobsmacked by tech that is old news by almost any standard. If you’re brave, you can give it a watch below. I know that when it comes to tech news, there can be something of a bubble, with writers and readers thinking that certain things are more well known to the general public than they are. But really, if you’ve walked into Best Buy’s laptop section at some point in the past decade, you probably would’ve seen some of the things the people in the ad are shell-shocked by: two-in-one laptops that fold to become a tablet have been popular since the early 2010s, Intel’s been pushing laptops with two screens since 2018, and PC gaming was a thing before the original IBM Personal Computer that popularized the term “PC.” To be fair, secret rooms are always pretty cool (unless they’re serial killer dungeons, I guess). As for Intel’s upgradability argument, I agree. 
While I understand that Apple building RAM into the same chip as the processor probably has some performance benefit, it does bite that you can’t upgrade — especially when M1 laptops start with a paltry 8GB and carry a $200 charge for the 16GB upgrade. The problem, though, is that there’s a high chance the Windows laptop this ad inspires you to buy will also have unremovable, non-upgradeable memory. While you can find Windows laptops with swappable RAM or SSDs, nowadays that’s a notable feature rather than an expected one. Perhaps this ad would be at least closer to forgivable if it weren’t a mashup of another series of ads Intel made featuring “I’m a Mac” actor Justin Long (which people hated) and those Chevy truck commercials (which people also hated). The like-to-dislike ratio for Intel’s new ad. Lots of people still spellbound, I guess. The ratio for Intel’s “Justin Gets Real: Having Choices” ad. Wait, actually, yes, it would still suck — because Intel’s bragging about things you can find in laptops with chips from its other competitor, AMD. There are 2-in-1s rocking Ryzen chips, and our top gaming laptop for 2021 has both an AMD CPU and a slot to add more RAM. “But what about dual-screen laptops with AMD,” asked probably no one, because it’s a niche product category. Asus has you covered. I don’t want this criticism to come off as coming from a butthurt Apple fan who’s, as one person in the ad put it, “100 percent loyal to Apple.” (Who talks like that?) While I mostly use Apple products, I’m also an unabashed old ThinkPad lover who thinks Intel has a point — if you can look past how condescending and embarrassing this ad campaign is. I do wish Apple gave people more choice and more upgradability with its computers, and I think people should be more willing to give other computing platforms a shot if they fit their needs better. But does an ad full of people acting like they’ve never seen a computer convince anyone of that? 
Intel tries to bash Apple again with a terrible ‘social experiment’ ad
  5. Intel's Loihi processors have electronics that behave a lot like neurons. Mike Davies, director of Intel's Neuromorphic Computing Lab, explains the company's efforts in this area. And with the launch of a new neuromorphic chip this week, he talked Ars through the updates. Despite their name, neural networks are only distantly related to the sorts of things you'd find in a brain. While their organization and the way they transfer data through layers of processing may share some rough similarities to networks of actual neurons, the data and the computations performed on it would look very familiar to a standard CPU. But neural networks aren't the only way that people have tried to take lessons from the nervous system. There's a separate discipline called neuromorphic computing that's based on approximating the behavior of individual neurons in hardware. In neuromorphic hardware, calculations are performed by lots of small units that communicate with each other through bursts of activity called spikes and adjust their behavior based on the spikes they receive from others. On Thursday, Intel released the newest iteration of its neuromorphic hardware, called Loihi. The new release comes with the sorts of things you'd expect from Intel: a better processor and some basic computational enhancements. But it also comes with some fundamental hardware changes that will allow it to run entirely new classes of algorithms. And while Loihi remains a research-focused product for now, Intel is also releasing a compiler that it hopes will drive wider adoption. To make sense out of Loihi and what's new in this version, let's back up and start by looking at a bit of neurobiology, then build up from there. From neurons to computation The foundation of the nervous system is the cell type called a neuron. All neurons share a few common functional features. At one end of the cell is a structure called a dendrite, which you can think of as a receiver. 
This is where the neuron receives inputs from other cells. Nerve cells also have axons, which act as transmitters, connecting with other cells to pass along signals. The signals take the form of what are called "spikes," which are brief changes in the voltage across the neuron's cell membrane. Spikes travel down axons until they reach the junctions with other cells (called synapses), at which point they're converted to a chemical signal that travels to the nearby dendrite. This chemical signal opens up channels that allow ions to flow into the cell, starting a new spike on the receiving cell. The receiving cell integrates a variety of information—how many spikes it has seen, whether any neurons are signaling that it should be quiet, how active it was in the past, etc.—and uses that to determine its own activity state. Once a threshold is crossed, it'll trigger a spike down its own axons and potentially trigger activity in other cells. Typically, this results in sporadic, randomly spaced spikes of activity when the neuron isn't receiving much input. Once it starts receiving signals, however, it'll switch to an active state and fire off a bunch of spikes in rapid succession. A neuron, with the dendrites (spiky protrusions at top) and part of the axon (long extension at bottom right) visible. NIH How does this process encode and manipulate information? That's an interesting and important question, and one we're only just starting to answer. One of the ways we've gone about answering it was via what has been called theoretical neurobiology (or computational neurobiology). This has involved attempts to build mathematical models that reflected the behavior of nervous systems and neurons in the hope that this would allow us to identify some underlying principles. Neural networks, which focused on the organizational principles of the nervous system, were one of the efforts that came out of this field. 
Spiking neural networks, which attempt to build up from the behavior of individual neurons, are another. Spiking neural networks can be implemented in software on traditional processors. But it's also possible to implement them through hardware, as Intel is doing with Loihi. The result is a processor very much unlike anything you're likely to be familiar with. Spiking in silicon The previous-generation Loihi chip contains 128 individual cores connected by a communication network. Each of those cores has a large number of individual "neurons," or execution units. Each of these neurons can receive input in the form of spikes from any other neuron—a neighbor in the same core, a unit in a different core on the same chip, or one from another chip entirely. The neuron integrates the spikes it receives over time and, based on the behavior it's programmed with, uses that to determine when to send spikes of its own to whatever neurons it's connected with. All of the spike signaling happens asynchronously. At set time intervals, embedded x86 cores on the same chip force a synchronization. At that point, the neuron will redo the weights of its various connections—essentially, how much attention to pay to all the individual neurons that send signals to it. Put in terms of an actual neuron, part of the execution unit on the chip acts as a dendrite, processing incoming signals from the communication network based in part on the weight derived from past behavior. A mathematical formula is then used to determine when activity has crossed a critical threshold, triggering spikes of its own when it does. The "axon" of the execution unit then looks up which other execution units it communicates with, and it sends a spike to each. In the earlier iteration of Loihi, a spike simply carried a single bit of information. A neuron only registered when it received one. Unlike a normal processor, there's no external RAM. Instead, each neuron has a small cache of memory dedicated to its use. 
This includes the weights it assigns to the inputs from different neurons, a cache of recent activity, and a list of all the other neurons that spikes are sent to. One of the other big differences between neuromorphic chips and traditional processors is energy efficiency, where neuromorphic chips come out well ahead. IBM, which introduced its TrueNorth chip in 2014, was able to get useful work out of it even though it was clocked at a leisurely kilohertz, and it used less than 0.0001 percent of the power that would be required to emulate a spiking neural network on traditional processors. Mike Davies, director of Intel's Neuromorphic Computing Lab, said Loihi can beat traditional processors by a factor of 2,000 on some specific workloads. "We're routinely finding 100 times [less energy] for SLAM and other robotic workloads," he added. What’s new in neuromorphics We'll get back to how asynchronous electronic spikes can actually solve useful problems in a bit. First, we'll take a look at what has changed between Loihi (which we'll call "the original processor" for clarity's sake) and Loihi 2. The difference is informative, because Intel has had hardware in the hands of the research community for a few years, and the company was able to incorporate their feedback into the design decisions. So, the differences between the two, in part, reflect what the people who actually use neuromorphic processors have found is holding them back. Some of the changes are the obvious things you'd expect in the transition between two generations of chips. Intel's using a more up-to-date manufacturing process, and it can now fit each core in roughly half the space needed in the original processor. Rather than being able to communicate with separate chips via a two-dimensional grid of connections, Loihi 2 can do so in three dimensions, allowing a stack of processing boards to greatly increase the total number of neurons. 
The number of embedded processors per chip, which help coordinate all the activity, has gone from three to six, and there are eight times as many neurons per chip. Despite containing thousands of individual neurons, Loihi chips aren't especially large. Intel But there are also some differences that are specific to Loihi's needs. Intel says it has gone through and optimized all the asynchronous hardware, giving Loihi 2 double the performance when updating a neuron's state and boosting the performance of spike generation ten-fold. Other changes are very specific to spiking neural networks. The original processor's spikes, as mentioned above, only carried a single bit of information. In Loihi 2, a spike is an integer, allowing it to carry far more information and to influence how the recipient neuron sends spikes. (This is a case where Loihi 2 might be somewhat less like the neurons it's mimicking in order to perform calculations better.) Another major change is in the part of the processor that evaluates the neuron's state in order to determine whether to send a spike. In the original processor, users could perform a simple bit of math to make that determination. In Loihi 2, they now have access to a simplified programmable pipeline, allowing them to perform comparisons and control the flow of instructions. Intel's Davies told Ars that you can specify these programs down to the per-neuron level, meaning that two neighboring neurons could be running completely different software. Davies also said that the way each neuron handles its internal memory is more flexible. Rather than specific aspects—like the list of neurons spikes should be sent to—having a fixed allocation, there's a pool of memory that can be divided up more dynamically. These changes do far more than let Loihi 2 execute existing algorithms more efficiently; they actually let the chip run algorithms that were a poor fit for the original processor. 
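The integrate-and-threshold behavior described above can be sketched in a few lines. This is a minimal leaky integrate-and-fire model for illustration only; the class name, weights, and constants are invented and do not reflect Intel's actual Loihi neuron implementation:

```python
# Minimal leaky integrate-and-fire "neuron": it accumulates weighted input
# spikes into a membrane potential that decays each timestep, and emits a
# spike of its own when the potential crosses a threshold.
class SpikingNeuron:
    def __init__(self, weights, threshold=1.0, leak=0.9):
        self.weights = weights      # one weight per input connection
        self.threshold = threshold  # firing threshold
        self.leak = leak            # per-timestep decay of the potential
        self.potential = 0.0

    def step(self, input_spikes):
        """input_spikes: 0/1 flags, one per input connection.
        Returns 1 if the neuron fires this timestep, else 0."""
        self.potential *= self.leak
        self.potential += sum(w * s for w, s in zip(self.weights, input_spikes))
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return 1
        return 0

n = SpikingNeuron(weights=[0.4, 0.3])
spike_train = [n.step([1, 1]) for _ in range(4)]
print(spike_train)  # constant input produces a regular spike pattern
```

Loihi 2's per-neuron programmability essentially replaces the fixed `step` rule here with a small user-supplied program.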
And that brings us back to the question of how neuromorphic computing gets anything done. From spikes to solutions How do you actually solve problems using something like a Loihi chip? You can make some parallels to quantum computing. There, the problem you want to solve gets converted into a combination of how you configure a set of qubits and the manipulations you perform on them. The rules of the system—the physics, in the case of quantum computing—then determine the final state of the system. That final state can then be read out and translated into a solution. For neuromorphic computing, the problem is set up by configuring the axons, which determine what neurons signal to what targets, as well as the code that determines when a neuron sends spikes. From there, the rules of the system determine how the spiking behavior evolves, either from the initial state or in response to further input. The solution can then be read out by examining the spiking behavior of different neurons. "Computation emerges from the interactions of the neurons," is how Davies put it. Intel provided a concrete example of this in a paper it published back in 2018. The example problem it used is finding a set of features that can be used to approximate the content of an image, in the same way that a series of circles can approximate the head of Mickey Mouse. This can be done on Loihi by assigning each neuron a feature that it represents and then having its spiking activity influenced by whether it recognizes that feature in an image. As things proceed, the neurons signal to each other in a way that tones down the activity of anything that isn't recognizing a feature. The end result of this competition is that the neurons that represent features present in the image will be actively spiking, while those that don't are relatively quiet. This can be read out as a feature list and the process started over again by feeding the system a new image. 
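A toy version of that competitive readout might look like the following. The binary "feature" patterns, constants, and update rule are invented for illustration and are far simpler than the sparse-coding formulation in Intel's 2018 paper:

```python
# Each "neuron" represents one feature; its drive is the overlap between
# its feature and the image, and lateral inhibition from rival neurons
# suppresses weak matches until only well-matched features stay active.
features = {
    "circle":   [1, 0, 0, 1],
    "square":   [0, 1, 1, 0],
    "triangle": [1, 1, 0, 0],
}
image = [1, 0, 0, 1]  # the image contains the "circle" pattern

drive = {name: sum(a * b for a, b in zip(f, image)) for name, f in features.items()}

activity = {name: 1.0 for name in features}
for _ in range(20):
    total = sum(activity.values())
    for name in features:
        inhibition = 0.5 * (total - activity[name])  # pressure from rivals
        activity[name] = max(0.0, activity[name] + 0.1 * (drive[name] - inhibition - 1.0))

winner = max(activity, key=activity.get)
print(winner)  # the best-matching feature ends up with the highest activity
```

As in the Mickey Mouse example, the readout is simply which neurons remain active once the competition settles.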
While it might be faster to reset the whole processor to its initial state before showing a second image, it shouldn't be necessary—the system is dynamic, so changing the input will mean changing the spiking behavior, allowing a new population of neurons to gradually assert itself. Learning on the fly and more This dynamic behavior makes for a contrast with trained neural networks, which are very good at recognizing what they've been trained on but not flexible enough to recognize something they weren't trained on. Davies described work Intel has done with Loihi to recognize gestures based on video input. He said that it's possible to get the system to recognize new gestures, training it on the fly without altering its ability to recognize gestures it was previously trained on. (This training won't tie the gesture to a specific action; the Loihi system just does the recognition and relies on other hardware to take actions based on that recognition.) Davies says these sorts of abilities have a lot of potential applications in robotics. Mobile robots have to be flexible enough to recognize and adjust to new circumstances when they find themselves facing a new environment. And any robot will see its behavior change as its parts wear down or get dirty, meaning its control systems have to adjust to new performance parameters. Those are the sorts of things that are traditionally associated with AI systems (whether they involve spiking neurons or not). But Davies also said that there are some very different use cases where spiking systems also perform well. One he mentioned was quadratic optimizations, which help with things like managing complex scheduling constraints (think of a nationwide rail system). These can be solved using traditional computers, but the processing resources rise rapidly with the number of constraints. 
Loihi has shown promising results on finding optimized solutions with a fraction of the computational resources, and Davies said it's flexible enough to be configured to either find the optimal solution or more quickly find a solution that's within 1 percent of the best. (Intriguingly, these are the same types of problems that run well on D-Wave's quantum annealing hardware. Davies said that Los Alamos was preparing a paper comparing the two.) Waiting for the software While spiking neural networks can be very effective at solving these sorts of problems, the challenge has often been finding the people who understand how to use them. It is a very different type of programming and requires an equally different way of thinking about algorithm development. Davies said that most of the people who are currently adept in it come from a theoretical neurobiology background (or are still in the field). So far, this has meant that Intel has mostly pushed Loihi into the research community, something that has limited its ability to sell the processor more widely. But long term, Intel hopes to see Loihi derivatives end up in a broad range of systems, from acting as a co-processor in embedded systems to large Loihi clusters in the data center. For that, however, it will need to be easy for companies to find people who can program for it. To that end, Intel is coupling the release of Loihi 2 with the release of an open source software framework called Lava. "LAVA is meant to help get neuromorphic [programming] to spread to the wider computer science community," Davies told Ars. He went on to say that, in the past, Intel hasn't provided enough abstraction from the inner workings of Loihi. If you wanted to run software on it, you had to understand spiking systems in detail. LAVA is a way of giving people the ability to work with Loihi-based systems without having to know the details. 
LAVA should be accessible today, and the first Loihi 2 boards will be made available to the research community via a cloud service shortly. A board with a single chip is being made available for evaluation purposes, and it will be followed by an eight-chip system called Kapoho Point later this year. Listing image by Aurich Lawson | Getty Images | Intel Intel launches its next-generation neuromorphic processor—so, what’s that again?
  6. The Intel® Extreme Tuning Utility (Intel® XTU) is a software application that allows you to fine-tune the settings of your K- or X-SKU processor. Using its various frequency, voltage, and other controls, you can fully customize your configuration in terms of power consumption and computing performance. Intel XTU is available for download at the Intel® Download Center and can be loaded on any Intel® system with a 2nd generation or newer Intel® Core™ processor and a chipset that supports overclocking. In a joint effort between Intel and HWBOT, we present version 5.1, a fully integrated overclocking software application that allows you to analyze your own configuration, share and find overclock settings from users around the world, and hype your own achievements through social media. Intel XTU and the HWBOT integration aim to facilitate overclocking for beginners and novice users as well as give the advanced overclocking community and its power users a platform to show their overclocking skills and knowledge. Features System Information – Inspect processor, memory, graphics, BIOS, operating system, and motherboard information. Basic Tuning – Simple three-step process: benchmark your starting performance, overclock your processor, and measure your new performance and witness the performance unleashed! Advanced Tuning – If you are an experienced overclocker, we give you all the controls to maximize the untapped performance of your processor, memory, and graphics hardware. Stress Test – Interested in how stable your overclock is? The included stress tests help you test your system to be sure. Benchmarking – Would you like to see how your setup compares? With Intel® XTU, you can benchmark your system and then compare your scores online with HWBOT.org. Application and Profile Pairing – Have you optimized your overclock for a specific application? Use the app-profile pairing feature to apply different overclocking settings to separate applications. 
Supported OS: • Windows 10 RS3 • Windows 10 RS4 • Windows 10 RS5 • Windows 10 19H1 • Windows 10 19H2 • Windows 10 20H1 • Windows 10 20H2 • Windows 11 21H2 Homepage: https://downloadcenter.intel.com/download/29183/Intel-Extreme-Tuning-Utility-Intel-XTU- Release Notes: https://downloadmirror.intel.com/29183/XTU- Download [With EULA]: https://downloadcenter.intel.com/downloads/eula/29183/Intel-Extreme-Tuning-Utility-Intel-XTU-?httpDown=https%3A%2F%2Fdownloadmirror.intel.com%2F29183%2Feng%2FXTUSetup.exe Download [Without EULA]: https://downloadmirror.intel.com/29183/eng/XTUSetup.exe SHA1: A7174136252A56529499955320C2BBAC0C875E65 This download is valid for the product(s) listed below [check release notes for the currently supported products/platforms]:
  7. US retailer Provantage has revealed the prices of Intel's 12th-gen processors and is now taking orders. The retailer has accepted pre-orders for some Alder Lake CPUs even though they haven't been officially launched yet. The vendor's pricing list reveals that the new CPUs are not significantly more expensive than their predecessors. Here's a look at the pricing:

     Model            Boxed Price (w/o VAT)  Tray Price (w/o VAT)  Cores/Threads  L3 Cache
     Core i9-12900K   $604.89                $605.92               16/24          30MB
     Core i9-12900KF  $578.13                $578.49               16/24          30MB
     Core i7-12700K   $422.17                $420.26               12/20          25MB
     Core i7-12700KF  $395.61                $392.26               12/20          25MB
     Core i5-12600K   $288.77                $283.59               10/16          16MB
     Core i5-12600KF  $261.77                $263.15               10/16          16MB

     With Alder Lake-S, Intel is moving to a hybrid "Big-Bigger" design it calls Performance Hybrid architecture, marrying its big cores (P-cores) with smaller efficient cores (E-cores). The company has also confirmed that Alder Lake and Windows 11 are built around each other for optimal performance. Although smaller than previously anticipated, a premium over Rocket Lake parts remains, owing to the new hybrid design and promised better performance. The Core i5-12600K and Core i5-12600KF, at $288.77 and $261.77 respectively, offer quite good value for money for people wanting to upgrade their old PCs, especially when Windows 11 won't support older processors. Intel has also finally ditched the 14 nm fabrication process in favor of 10 nm for Alder Lake, which could bring major performance improvements and better efficiency. Alder Lake-S pricing revealed by US retailer Provantage
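Since the article frames the Core i5 parts as the value picks, a rough dollars-per-thread comparison can be derived straight from the boxed prices and thread counts listed above (retailer figures, pre-launch; threads are no proxy for real-world performance on a hybrid design, so treat this as a crude metric):

```python
# Boxed price (USD, w/o VAT) and thread count per SKU, from the table above.
skus = {
    "Core i9-12900K":  (604.89, 24),
    "Core i9-12900KF": (578.13, 24),
    "Core i7-12700K":  (422.17, 20),
    "Core i7-12700KF": (395.61, 20),
    "Core i5-12600K":  (288.77, 16),
    "Core i5-12600KF": (261.77, 16),
}

per_thread = {name: round(price / threads, 2) for name, (price, threads) in skus.items()}
for name, cost in sorted(per_thread.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cost}/thread")
```

On this metric the i5-12600KF comes out cheapest per thread, consistent with the article's value-for-money assessment.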
  8. Intel's new Alder Lake 12th-gen processors are expected to offer a significant performance boost for Windows 11 users, but recently leaked pricing suggests this will come at a pretty steep price. A retailer has leaked the pricing of the chips, which can be seen below:

Model | European Price (w/ VAT) | US Price (w/o VAT) | Cores/Threads | L3 Cache Size
Core i9-12900K | €736/€741 | $705 | 16/24 | 30MB
Core i9-12900KF | €704/€708 | $674 | 16/24 | 30MB
Core i7-12700K | €524/€520 | $495 | 12/20 | 25MB
Core i7-12700KF | €492/€487 | $464 | 12/20 | 25MB
Core i5-12600K | €365/€358 | $343 | 10/16 | 16MB
Core i5-12600KF | €333/€326 | $312 | 10/16 | 16MB

The prices are somewhat higher than last year's flagship processors but still cheaper than the AMD Ryzen 9 5950X, which is currently listed at $749/€789 on Amazon. Another expense to bear in mind is that Alder Lake CPUs can only be inserted into new motherboards with an LGA 1700 socket, meaning a new motherboard needs to be added to the cost comparison. Motherboards for the new processors, led by Intel's LGA1700-based top-end Z690 boards, will go on sale starting on the 19th of November. Alder Lake will be Intel's first performance hybrid architecture, combining a Performance-core and an Efficient-core to provide significant performance across all workload types. Intel also detailed the new Efficient-core and Performance-core x86 CPU units that will power Alder Lake. The Efficient-core microarchitecture is designed for throughput efficiency and efficient offloading of background tasks for multitasking. It runs at low voltage and creates headroom to increase frequency and ramp up performance for more demanding workloads. The new Efficient-core will deliver the same performance as a Skylake core but with 40% less power. The Performance-core microarchitecture is designed for speed and is the highest-performing CPU core Intel has built. 
It pushes the limits of low latency and single-threaded application performance and provides a significant performance boost at high power efficiency, better supporting large applications. Intel claims that this new Performance-core will deliver 19% better performance than 11th-gen cores. To make both kinds of core work seamlessly with the OS, Intel developed Intel Thread Director, which enables the OS to place the right thread on the right core at the right time. Alder Lake will power devices ranging from ultra-portable laptops to enthusiast and commercial desktops. Early benchmarks have shown Windows 11 could be up to 15% faster on Alder Lake than Windows 10. via Techspot Intel's Alder Lake processors pricing leaked (and it's not cheap)
  9. Two weeks ago Intel launched its Alder Lake processor architecture at its Architecture Day 2021 event. The processor is optimised for Windows 11, the first Microsoft OS to take advantage of Intel's big.LITTLE architecture. When Intel spoke about its Alder Lake processor and hybrid architecture, along with the new Arc GPUs, the company said: To enable this high level of coordination that improves real-world performance, Intel has partnered with Microsoft to incorporate this revolutionary capability into the next version of Windows 11. When Microsoft was interviewed on the subject, Mehmet Iyigun – Partner Development Manager – said: Throughout the development cycle of Windows 11, my team worked with our colleagues at Intel to update and optimize our next operating system to make the most of the Performance Hybrid architecture and Thread Director in particular. With Thread Director feedback, the Windows 11 thread scheduler is much smarter in dynamically choosing the most appropriate core based on the workload to get the best power and performance. Prospective Windows 11 users would understandably want their new PCs to have this new processor, and we now believe we know when Alder Lake hardware will hit the street. Wccftech reports that devices bearing the processor will go on sale starting on the 19th of November, paired with Intel's LGA1700-based top-end Z690 motherboards. Alder Lake will be Intel's first performance hybrid architecture, combining a Performance-core and an Efficient-core to provide significant performance across all workload types. Intel also detailed the new Efficient-core and Performance-core x86 CPU units that will power Alder Lake. The Efficient-core microarchitecture is designed for throughput efficiency and efficient offloading of background tasks for multitasking. It runs at low voltage and creates headroom to increase frequency and ramp up performance for more demanding workloads. 
The new Efficient-core will deliver the same performance as a Skylake core but with 40% less power. The Performance-core microarchitecture is designed for speed and is the highest-performing CPU core Intel has built. It pushes the limits of low latency and single-threaded application performance and provides a significant performance boost at high power efficiency, better supporting large applications. Intel claims that this new Performance-core will deliver 19% better performance than 11th-gen cores. To make both kinds of core work seamlessly with the OS, Intel developed Intel Thread Director, which enables the OS to place the right thread on the right core at the right time. Alder Lake will power devices ranging from ultra-portable laptops to enthusiast and commercial desktops. Early benchmarks have shown Windows 11 could be up to 15% faster on Alder Lake than Windows 10. via Neowin Ship date of Intel's Alder Lake processors for Windows 11 leaks
  10. Intel has been struggling to deliver impressive generational performance improvements over the last few years, and it looks like the new Alder Lake generation will be no exception. At its recent Architecture Day 2021 event, Intel promised that its Alder Lake-S processor would deliver a 19% instructions-per-cycle (IPC) improvement over the Rocket Lake-S processor across a wide range of benchmarks. A leaked Adobe After Effects benchmark of an Intel Core i9-12900K processor was discovered in Puget Systems' PugetBench database by Twitter leaker Benchleaks, and the improvements are much less impressive. The results, which range between 1541 and 1575, were obtained on an ASUS ROG STRIX Z690 motherboard paired with a GeForce RTX 3090. This can be compared to a recent 8C/16T Rocket Lake-S Core i9-11900K benchmark running on a similar system with 64GB of dual-channel DDR4-2133, versus the 12900K's 64GB of dual-channel DDR5-4800. In the best case, the Core i9-12900K is only around 11.5% faster than the older processor. Of course, a roughly 10% increase in performance is nothing to sneeze at, and battery life improvements related to the big.LITTLE architecture, combined with Windows 11 optimizations, should still make the upgrade more than meaningful. However, it is also clear that those with year-old systems have no need to feel they must upgrade for a performance boost. via Neowin, VideoCardz Alder Lake-S Core i9-12900K leak suggests performance might be underwhelming
  11. Intel has been having a somewhat tough time ever since AMD's Zen came onto the scene. Team Blue is no longer the dominant player in the x86 space, and the fight has become far more even, with AMD mostly gaining the upper hand in nearly all markets except perhaps notebooks. While the chief architects of Zen were Michael Clark and Suzanne Plummer, renowned engineer Jim Keller also had a huge role in its design philosophy. Since then, Keller has left AMD and is now working at Intel. And according to YouTuber Moore's Law Is Dead (MLID), "Jim Keller is building a Zen 5 killer" at Intel. This Zen 5 killer is what Intel is allegedly working on under the leadership of Keller, and the project is called "Royal Core". The Royal Core parts will reportedly make a full debut after 2024 in Nova Lake. AMD is expected to debut Zen 4 next year with Ryzen 7000 (Raphael) processors and Socket AM5 motherboards, so it will likely be at least a couple of years before we see Zen 5 in action. Royal Core allegedly aims to out-compete not only Zen 5 but also the Arm and M1 threats Intel is currently facing. In fact, we apparently wouldn't even have to wait until 2025 for that, as Intel is already reportedly planning to take on the M1 with its upcoming Arrow Lake design. Arrow Lake-P, according to YouTuber AdoredTV, will be a mobile CPU design with six P-cores, eight E-cores (6+8), and 320 EUs. Intel has been dedicating large portions of its chips' die area to integrated graphics in its mobile SKUs for a while now, and the same is the case for the Alder Lake architecture, which was announced yesterday. It seems the practice will continue in Intel's future designs as well. Other sources, like an ex-Redditor with the username "mooreslawisnotdead", say that Arrow Lake will be a valiant effort but won't be able to topple the M1 in power efficiency. 
In fact, this is the same poster who first leaked these alleged next-gen product codenames, although the Royal Core moniker is new. While the user has since deleted his Reddit account for some reason after posting this, the information was saved by an AnandTech forum member.

Raptor Lake (Raptor Cove / Gracemont), Q3'22 / Q4'22 – A 10% CPU performance boost and an 8/16 configuration put Intel back on par, but expect AMD and Apple to refresh their products as well.

Meteor Lake (Redwood Cove / Crestmont), Q2'23 – Intel's first true chiplet- or tile-based design, with different dies built on TSMC and Intel processes. More of a node shrink, with single-digit performance improvements. AMD will again extend its lead with Zen 4+ / 5.

Arrow Lake (Lion Cove / Skymont), Q4'23 – Will feature an updated compute tile with an 8/32 configuration for the high-end enthusiast products. Might achieve parity with AMD's offerings at the time but loses out to Apple in power efficiency.

Lunar Lake (Lion Cove / Skymont), Q4'24 – This is the product that will use TSMC 3nm, as reported by Nikkei. A big performance jump is expected, designed to achieve parity with or beat AMD and Apple in both performance and power efficiency.

Nova Lake (Panther Cove [tentative] / Darkmont), 2025 – This will mark the biggest architectural change since the Core architecture was introduced in 2006. Intel is working to build an entirely new architecture from the ground up, much like Ryzen, with up to 50% CPU performance improvement over Lunar Lake.

While all this is unconfirmed news that is still at least a couple of years away if authentic, it will be interesting to see what Jim Keller manages to accomplish at Intel, as the man has a near-perfect track record so far. Source: MLID (YouTube) via Curmudgeon666 (AnandTech forum) Update: The article was written under the misconception that Jim Keller was still working for Intel after he joined the firm in 2018. However, Keller departed from the company in June of last year. 
Intel stated that he left for personal reasons, but rumor has it that Keller wasn't very pleased with Intel's inability to move below 10nm lithography quickly enough, which would have been essential to carry out his ambitious CPU plans. Jim Keller had reportedly suggested to Intel's management that it outsource some of its chip production at the time, which Intel allegedly refused, and Keller is said to have left the company as a consequence of this disagreement. While this may simply be a rumor, it is interesting that Intel, with its IDM 2.0 strategy, has recently decided to move towards this outsourcing path. For example, its upcoming discrete gaming Arc GPUs will be built on TSMC's N6 process. So although Keller isn't at Intel anymore, it is possible that the rumored Royal Core project could still become a real thing, since Intel will be able to use the supposedly superior process node that Keller allegedly required. Something similar has happened in the past: after joining AMD in 2012 to work on Zen, K12, and Project Skybridge, Keller left the company three years later in 2015, and the Zen-based CPUs didn't come around until first-gen Ryzen hit in 2017. Intel reportedly developing a "Zen 5 killer" with next-gen Royal Core project [Update]
  12. Still light on details, but the Arc GPUs will be a big step up from integrated. Intel disclosed a few more details about the Xe-HPG architecture underpinning its upcoming Arc GPUs. Earlier this week, Intel announced Arc, the branding for the new gaming GPUs that will face off with Nvidia's GeForce and AMD's Radeon cards when they launch early in 2022. Today, Intel provided a few additional details on Arc, its underlying Xe-HPG architecture, its graphics drivers, and the "XeSS" upscaling technology that will work with Intel's GPUs as well as GPUs from Nvidia and AMD. The HPG in Xe-HPG stands for "high-performance gaming," and it builds on the Xe-LP ("low-power") graphics tech included in Intel's 11th-generation Tiger Lake laptop processors and the low-end DG1 dedicated GPU. At a high level, Xe-HPG will support all of the features in DirectX 12 Ultimate, just like Nvidia's RTX 2000- and 3000-series GPUs and AMD's RDNA2-based Radeon RX 6000-series cards. This includes, most notably, support for hardware-accelerated ray tracing, variable rate shading, and mesh shaders. Intel is replacing the "EU" as its base GPU hardware measurement with the "Xe-core," a mix of 16 vector engines, 16 matrix engines, some cache, and a few other elements. Four Xe-cores combine with ray tracing and fixed-function hardware to form a render slice, and a render slice is combined with more cache and a memory interface to make a GPU. Alchemist chips will scale up to eight render slices total, though Intel wasn't specific about what configurations it planned to bring to market. Since at least 2006, Intel has talked about its GPU hardware in terms of "execution units," or EUs. The company is doing away with that terminology for Xe-HPG, replacing it instead with the concept of the "Xe-core." Each Xe-core is composed of 16 vector engines and 16 matrix (or XMX) engines, as well as L1 cache and some other hardware. 
Four Xe-cores combine with ray-tracing units and other fixed-function hardware to form a "render slice," which is the bare minimum any Xe-HPG GPU will need to function (along with L2 cache and a memory interface). Alchemist-based chips can include "up to" eight of these render slices, which implies that we'll see at least a couple of different Arc GPUs with different levels of computing power. But Intel hasn't disclosed any specific hardware configurations, and it also hasn't gone into any detail about clock speeds, the memory interface, or the amount or type of RAM that the first Arc GPUs will include. (Quick terminology reminder: Xe-HPG is the name of the GPU microarchitecture, Alchemist is the codename for the chips based on Xe-HPG, and Arc is the consumer branding you'll see on store shelves.) Intel says that the Intel-specific XMX version of XeSS will provide the best image quality and the shortest render time. But the DP4a version will have the benefit of working on some Nvidia and AMD GPUs, which may make it more appealing to game developers. The XMX hardware in Xe-HPG GPUs will be key to Intel's AI-accelerated upscaling technology, also called Xe Super Sampling or XeSS. Like Nvidia's DLSS, the idea is to upscale lower-resolution images with as little quality loss as possible. Doing so will make it possible to play a game at 4K resolutions without requiring the graphical horsepower necessary to actually render a native 4K image. Intel says that you'll get the best image quality and performance out of XeSS if you're taking advantage of its XMX hardware, but there's a separate version of XeSS that uses DP4a instructions common in modern GPUs from Nvidia and AMD as well. Intel didn't call out specific GPU models, but DP4a instructions are included as far back as the Pascal architecture in Nvidia's still-popular GTX 1060 and 1050 series cards. Nvidia doesn't actually support either of these models with DLSS. 
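For readers wondering what DP4a actually does: it is a four-element dot product of 8-bit values accumulated into a 32-bit integer, the basic primitive of int8 neural-network inference that upscalers like XeSS can lean on. A scalar sketch of its semantics (my own illustration; the real instruction operates on packed registers in a single cycle):

```python
# Scalar sketch of what one DP4a instruction computes: a dot product of
# four 8-bit values, accumulated into a 32-bit integer.

def dp4a(a: list[int], b: list[int], acc: int) -> int:
    """acc + a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3]"""
    assert len(a) == len(b) == 4
    return acc + sum(x * y for x, y in zip(a, b))

print(dp4a([1, 2, 3, 4], [5, 6, 7, 8], 0))  # 1*5 + 2*6 + 3*7 + 4*8 = 70
```

A GPU exposing this as one instruction can chew through the small-integer matrix math of an upscaling network far faster than with general-purpose multiplies, which is why DP4a support on older Nvidia and AMD cards matters for XeSS portability.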
Integrated GPUs will be able to benefit from XeSS as well. The SDK for the XMX version of XeSS will be available this month, with the DP4a version following later this year. Intel talked about its upcoming GPU codenames earlier this week when it announced the Arc branding. Alchemist is Xe-HPG, Battlemage will be Xe2-HPG, and Celestial will be Xe3-HPG. Druid's architecture doesn't have a name yet—we might suggest Xe4-HPG. When they're released, Arc GPUs will use the exact same driver package as Intel's integrated GPUs, so improvements made to benefit one architecture can potentially benefit all of them (Intel boasted that it had overhauled its graphics driver in the past year to improve performance in CPU-bound games running on Xe-LP GPUs by between 15 and 80 percent). Finally, the elephant in the room: Intel isn't making Alchemist chips in its own chip factories, at least not now. The company announced that it will use TSMC's 6 nm process for Alchemist chips, meaning that Intel's GPUs will need to fight for capacity with AMD, Apple, and everyone else manufacturing cutting-edge chips at TSMC. "The reason [for using TSMC for Xe-HPG] is simple," wrote Intel SVP Stuart Penn in an accompanying press release. "Just as our designers use the right architecture for the right workload, we also choose the node that best fits that architecture. At this point in time, these foundry nodes are the right choice for our discrete graphics products." Intel provides more details on its Arc GPUs, which will be made by TSMC
  13. At its Architecture Day 2021 event today, Intel outlined details regarding its upcoming CPU and GPU platforms. The company released more information regarding its Arc-branded discrete graphics cards and the accompanying features. Alongside Arc GPUs, it also discussed its upcoming Alder Lake processors, which are launching later in the year. Alder Lake is based on Intel's upcoming hybrid Big-Bigger approach called Performance Hybrid architecture, and among other things, the company today explained how task assignment will work with the new design. In fact, during its presentation, Intel confirmed that Windows 11 is optimized to work really well with its Alder Lake CPUs, stating: To enable this level of fine-grained coordination for real performance, Intel jointly worked with Microsoft to incorporate this revolutionary capability into the upcoming Windows 11 release. Mehmet Iyigun, Partner Development Manager at Microsoft, added to this, saying: Throughout the Windows 11 development cycle, my team has been working with our colleagues at Intel to enlighten and optimize our upcoming OS to take full advantage of the Performance Hybrid architecture and Thread Director in particular. With Thread Director feedback, the Windows 11 thread scheduler is much smarter about dynamically picking the most appropriate core based on the workload to achieve the best power and performance. We had apparently already found evidence of this when a Lakefield Core i7-L16G7 was tested earlier in the Windows 11 environment. For those wondering what Thread Director is: essentially, it is a new technology in Alder Lake that provides feedback to the OS, helping the Windows 11 scheduler assign workloads efficiently so that more complex ones, composed of vector instructions among others, go to the Performance core (P-core), while lighter scalar instructions and background tasks are sent to the Efficient core (E-core). 
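As a rough illustration of the placement rule just described, the decision Thread Director's feedback informs can be sketched as a simple classifier. This is a toy model of the idea, not Intel's actual algorithm; the threshold and the fields of the sample are invented for the example:

```python
# Toy sketch of the scheduling idea behind Thread Director (not Intel's
# actual algorithm): hardware classifies each thread's instruction mix,
# and the OS scheduler uses that feedback to pick a core type.

from dataclasses import dataclass

@dataclass
class ThreadSample:
    name: str
    vector_ratio: float   # fraction of vector (e.g. AVX) instructions observed
    background: bool      # is this a background/low-priority task?

def pick_core(sample: ThreadSample) -> str:
    """Return 'P-core' or 'E-core' from the (simplified) feedback."""
    if sample.background:
        return "E-core"    # background work goes to efficient cores
    if sample.vector_ratio > 0.2:
        return "P-core"    # vector-heavy work benefits from performance cores
    return "E-core"        # light scalar work stays on efficient cores

workloads = [
    ThreadSample("video encode", vector_ratio=0.6, background=False),
    ThreadSample("mail sync", vector_ratio=0.01, background=True),
    ThreadSample("ui thread", vector_ratio=0.05, background=False),
]
for w in workloads:
    print(w.name, "->", pick_core(w))
```

In the real design the classification happens in hardware and is re-evaluated continuously, so a thread can migrate between core types as its behavior changes.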
However, the feedback is dynamic and it can adapt depending on the workload present on each core. To know more about how Thread Director works, you can view the demo by Intel linked in the source below. Source: Intel Newsroom (YouTube) Microsoft and Intel confirm Windows 11 is optimized for Alder Lake's big.LITTLE design
  14. Intel announced its new brand, dubbed "Arc", for the upcoming Xe HPG-based discrete graphics cards a few days ago, with the first family of Arc GPUs – codenamed Alchemist – launching in Q1 next year. To promote the new brand, the company launched a giveaway contest where lucky winners would win the Arc Graphics Prize Package worth US$1,000 Approximate Retail Value (ARV), which would also include an Arc GPU as part of the prize. Intel graphics head Raja Koduri also hinted on Twitter earlier today that more information on the company's upcoming products, possibly including graphics, would be shared at its Architecture Day 2021 event. As such, more information has indeed been released regarding the Arc discrete GPUs. First, a roadmap for the Arc GPUs has been laid out, which reiterates some of the information Intel had revealed earlier. For the first time, Intel has also revealed that Xe HPG brings a "1.5x frequency uplift and 1.5x performance/watt improvement compared with Xe LP microarchitecture". Intel's Arc GPUs will be powered by what the company is calling its "Xe-cores". Each such Xe-core will comprise 16 Vector and Matrix Engines, also called Execution Units (EUs), and four of these Xe-cores are grouped together to form a "Render slice". To sum it up:

1 Xe-core => 16 EUs
1 Render slice => 4 Xe-cores => 64 EUs

The Render slice essentially helps Intel scale the Arc GPU design from top-end parts all the way down to entry-level products. For example, Intel's flagship 512 EU SKU1 GPU will feature eight such Render slices, or 32 Xe-cores, for a total of 512 EUs. The number of ray tracing units is equivalent to the number of Xe-cores, which means the flagship 512 EU SKU1 part could feature a total of 32 ray tracing units. Intel, unlike AMD and Nvidia, doesn't have a fancy name for its ray tracing units yet. The first-gen Alchemist GPUs for laptops and desktops will be built on TSMC's N6 (6nm) process. 
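The building-block arithmetic above can be captured in a few lines. This is a simple illustration using only the figures Intel quoted (16 EUs per Xe-core, 4 Xe-cores per Render slice, one ray tracing unit per Xe-core); it is not an official sizing tool:

```python
# Xe-HPG building-block arithmetic as stated in the article:
# 1 Xe-core = 16 EUs, 1 Render slice = 4 Xe-cores, 1 RT unit per Xe-core.

EUS_PER_XE_CORE = 16
XE_CORES_PER_SLICE = 4

def gpu_config(render_slices: int) -> dict:
    """Derive Xe-core, EU, and ray-tracing-unit counts from slice count."""
    xe_cores = render_slices * XE_CORES_PER_SLICE
    return {
        "xe_cores": xe_cores,
        "eus": xe_cores * EUS_PER_XE_CORE,
        "rt_units": xe_cores,  # RT units match the Xe-core count
    }

# Flagship "SKU1" with eight Render slices:
print(gpu_config(8))  # {'xe_cores': 32, 'eus': 512, 'rt_units': 32}
```

Dropping the slice count to, say, 2 gives a 128 EU entry-level part, which is exactly the scaling flexibility the Render slice is meant to provide.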
Intel's new discrete GPUs will feature the new Xe Matrix Extensions (XMX) and DP4a instructions for AI/ML/DL acceleration. The latter was already present inside Xe LP. These will help power Intel's AI-driven XeSS image upscaling technique with which the company is promising an enormous boost in performance compared to native rendering. Intel says the XeSS SDK will be available to developers this month. You may find more information on the official blog here. Update: Added lithography information for Alchemist GPUs. Intel details Arc architecture, XeSS and more at Architecture Day 2021
  15. At Architecture Day 2021, Intel detailed its next-generation client architecture, code-named "Alder Lake". Alder Lake will be Intel's first performance hybrid architecture, combining a Performance-core and an Efficient-core to provide significant performance across all workload types. Intel also detailed the new Efficient-core and Performance-core x86 CPU units that will power Alder Lake. The Efficient-core microarchitecture is designed for throughput efficiency and efficient offloading of background tasks for multitasking. It runs at low voltage and creates headroom to increase frequency and ramp up performance for more demanding workloads. The new Efficient-core will deliver the same performance as a Skylake core but with 40% less power. The Performance-core microarchitecture is designed for speed and is the highest-performing CPU core Intel has built. It pushes the limits of low latency and single-threaded application performance and provides a significant performance boost at high power efficiency, better supporting large applications. Intel claims that this new Performance-core will deliver 19% better performance than 11th-gen cores. To make both kinds of core work seamlessly with the OS, Intel developed Intel Thread Director, which enables the OS to place the right thread on the right core at the right time. Alder Lake will power devices ranging from ultra-portable laptops to enthusiast and commercial desktops. Source: Intel Intel details its next-generation client architecture, code-named "Alder Lake"
  16. Leaked specs show performance and efficiency cores in 12th- and 13th-gen CPUs. It looks like big.little CPU design—an architecture that includes both fast, power-hungry cores and slower, more power-efficient cores—is here to stay in the x86_64 world, according to unverified insider information leaked by wccftech and AdoredTV. Intel’s big/little designs enter round two At Intel's 2021 Architecture day, the company confirmed that its upcoming Alder Lake (12th generation) processors will use a mixture of performance and efficiency cores. This brings the company's discontinued 2020 Lakefield design concept firmly into the mainstream. Big.little designs run time-sensitive tasks on bigger, hotter performance cores while running background tasks on slower but much less power-hungry cores. This architecture is near-universal in the ARM world—which now includes Apple M1 Macs as well as Android and iOS phones and tablets—but it's far less common in the x86_64 "traditional computing" world. Intel's Lakefield architecture only lasted one year and only consisted of two products—the i5-L16G7 and i3-L13G4. Market demand was almost nonexistent for Lakefield, likely due to a poor mixture of performance and efficiency cores—both Lakefield CPUs offered only a single performance core, backed by four efficiency cores. Expected 12th- and 13th-gen SKUs Alder Lake—Intel's 12th Core generation, expected in November 2021—will shift the design balance far more heavily to the performance side, with i9-12900K expected to offer eight Golden Cove performance cores and eight Gracemont efficiency cores. The Golden Cove cores support hyperthreading, but the Gracemont cores do not, resulting in lopsided core/thread counts—such as 16 cores/24 threads for the expected i9-12900K. According to the rumors leaked by AdoredTV, Raptor Lake will build on and expand the Alder Lake design, replacing Golden Cove performance cores with Raptor Cove and enhancing the design of the Gracemont efficiency cores. 
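The lopsided core/thread arithmetic works out as follows. A quick sketch using the leaked configurations above, assuming (as the article states) that only the performance cores are hyperthreaded:

```python
# Core/thread arithmetic for the hybrid design: Golden Cove P-cores are
# hyperthreaded (2 threads each), Gracemont E-cores are not (1 thread each).

def core_thread_count(p_cores: int, e_cores: int) -> tuple[int, int]:
    """Return (total cores, total threads) for a hybrid configuration."""
    cores = p_cores + e_cores
    threads = p_cores * 2 + e_cores
    return cores, threads

print(core_thread_count(8, 8))    # expected i9-12900K: (16, 24)
print(core_thread_count(8, 16))   # rumored i9-13900K: (24, 32)
print(core_thread_count(6, 4))    # expected i5-12600K: (10, 16)
```

The same formula reproduces the 16/24 and 10/16 figures quoted for Alder Lake SKUs elsewhere in this thread, which is a good sanity check on the leaks' internal consistency.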
We're also expecting core counts to increase from Alder Lake to Raptor Lake, with an expected 24-core/32-thread i9-13900K due in 2022, if all goes well. Power consumption is one of the key details in most of these rumors, with three major classes of power consumption expected in desktop Alder and Raptor Lake CPUs alike—at 35W, 65W, and 125W TDP. We can take a look at the projected core counts of enthusiast CPUs (all of which are expected to come in at 125W TDP) below, after current-gen Rocket Lake-S CPUs for comparison:

Rocket Lake-S (11th-gen desktop CPUs)
Model | Cores | TDP (PL1)
i5-11600K | 6p/0e | 125W
i7-11700K | 8p/0e | 125W
i9-11900K | 8p/0e | 125W

Alder Lake-S (12th-gen desktop CPUs)
Model | Cores | TDP (PL1)
i5-12600K | 6p/4e | 125W
i7-12700K | 8p/4e | 125W
i9-12900K | 8p/8e | 125W

Raptor Lake-S (13th-gen desktop CPUs)
Model | Cores | TDP (PL1)
i5-13600K | 6p/8e | 125W
i7-13700K | 8p/8e | 125W
i9-13900K | 8p/16e | 125W

Both Alder Lake and Raptor Lake are expected to use Intel's 10nm process, which the consumer space has so far seen only in mobile CPUs. Rocket Lake-S was originally expected to be built on 10nm but ended up built on Intel's venerable 14nm process with "backported features" from the 10nm process, which still wasn't quite ready for (desktop) primetime. Power and performance The key takeaway here is that, while Intel is bumping up its core counts significantly, it's only doing so by adding efficiency cores—the performance core counts are expected to remain unchanged at 6, 8, and 8 for the Core i5, Core i7, and Core i9 families, respectively. Intel seems to be banking on adding efficiency cores as a way to become a bit more competitive in multi-threaded workloads without repeating AMD's disastrous mistake in the Piledriver era—that is, attempting to compete on raw performance by creating such a power-guzzling monstrosity that nobody wants to live with it. Of course, this ignores a couple of room-filling elephants. 
On the performance side, it seems unlikely that an i9-12900K with 16 total cores will compete well with its closest AMD equivalent. A current-generation Ryzen 9 5950X already offers 16 cores—and all 16 are performance cores. The Ryzen 9 5900X comes in at "only" 12 cores—but again, all 12 are performance cores. We're not sure what Team Red has in store for its next generation yet, but we certainly aren't expecting core count decreases. On the power-consumption side, we're only looking at Intel's PL1 TDP rating—the TDP necessary to run the chip off turbo. We think it's too early to give any credibility to PL2 rumors for Alder and Raptor Lake, but we know Rocket Lake-S' "125W TDP" parts can easily hit 200W or higher when running at turbo frequencies. Anandtech shows peak TDP of up to 300W for the i9-11900K, with even the i5-11600K coming in at 210W—while AMD's peak, on-boost power consumption stays below 150W on the 12-core Ryzen 9 5900X and 16-core 5950X. Conclusions First and foremost, we want to remind everyone that the plural of "rumor" is not "established fact"—nearly all of this reporting relies on data leaked from various sources throughout 2021. We can be certain that Alder Lake and Raptor Lake will use a hybrid big.little design as described—but most everything else is still up for grabs to one degree or another. We're reasonably confident that Alder and Raptor Lake-S SKUs will exist with core counts as shown here and that TDP won't change significantly as compared to Rocket Lake-S. But it's unclear whether these designs will successfully follow Apple's M1 in securing the big.little hybrid design's place in desktop computing history or be relegated to the dustbin like last year's Lakefield. AdoredTV's recent leak provides much of what little information we have on expected Raptor Lake designs. Intel leaks show next-gen desktop CPUs with hybrid "big.little" design
  17. Intel has released new wireless and GPU drivers that include optimizations for Windows 11. The GPU drivers were released on the 11th of August and, besides bringing explicit support for Windows 11, have the following changelog:

This update fixes crashing problems observed in games like Doom Eternal, Chivalry II, Scarlet Nexus, Total War: Warhammer II, Horizon Zero Dawn, and The Witcher 3, as well as a black screen when playing Warframe in DX11.
Intel has fixed an issue that prevents the operating system's dual boot menu from working.
A bug that causes screen flickering in Microsoft Edge has been fixed.

The updated wireless drivers (Wi-Fi and Bluetooth) were released today, the 14th of August 2021, and also bring support for Windows 11, with the following changelog:

Support for Windows 11.
A bug where LG televisions fail to connect to Miracast when using Intel's Wi-Fi has been fixed.
Audio quality in Microsoft Teams calls attended using a Bluetooth headset is improved.
Bluetooth mouse movement should now be smoother.

The Wi-Fi driver update is available for the following wireless adapters:
for AX210/AX201/AX200/9560/9260/9462/9461.
for 8265/8260.
for 7265(Rev. D)/3165/3168.

The Bluetooth driver update is available for the following wireless adapters:
Driver version – AX210, AX201, AX200, 9560, 9462, 9461
Driver version – 9260, 8260, 18260, 8265, 18265
Driver version – 3168, 3165, 7265, 17265

The optimised drivers can be downloaded using Intel's Update Assistant Tool. via WindowsLatest Intel updates Wireless and GPU drivers with Windows 11 support
  18. Intel's 3nm orders include a new GPU and three server chips

Intel may have pulled a fast one on its industry rivals by buying up a majority of the 3nm node capacity at TSMC in order to fabricate its new GPU and a number of server chips, something that could inhibit AMD and Apple from ramping up production of their own next-gen chips in 2022. Production with the 3nm node is expected to start in Q2 2022, with mass production expected in mid-2022, according to Wccftech. Production capacity would reach about 4,000 wafers a month in May of next year, with mass production capacity ramping up to 10,000 wafers a month. Intel, unlike AMD or Apple, has its own fabrication plants that it uses for most of its chip production, though it has struggled in recent years to hit its own development roadmap targets. None of the 3nm process orders will be for its consumer market processors, namely Raptor Lake, which is expected to launch in mid-to-late 2022. Instead, reports indicate that the product lines in the order will come out of its graphics and server units, specifically a new GPU and three new server processors, most likely next-gen Xeon processors meant for data centers. Analysis: Is the Intel Iris Xe graphics card finally making it to production? We don't know much about any of these chips yet, though the GPU could be the long-awaited Intel discrete graphics card based on the company's Iris graphics processor. The Intel Iris Xe graphics card has been in the works for a long time now, but we've yet to really see it beyond some prototypes and presentation material. If Intel is making a substantial investment in its graphics unit, however – and eating up a substantial chunk of TSMC's 3nm node capacity in the process – then we certainly hope this is an indication that Intel's discrete graphics card is on its way to customers. 
While we're not expecting it to immediately dethrone AMD and Nvidia in the graphics card space, turning things into a three-way fight rather than a head-to-head matchup will push the three companies to innovate even more. This can only be good for gamers and other PC enthusiasts in the end, assuming we're ever able to get our hands on any of these graphics cards in the first place. Intel locks down all remaining TSMC 3nm production capacity, boxing out AMD and Apple
  19. Can Team Blue return to pole position with Alder Lake? AMD's new Ryzen processors are dominating the most popular online retailers in the US, accounting for as much as 75% of new desktop CPU sales and filling up the bestseller lists at major online retailers. Team Red's latest Ryzen 5000-series processors have proven to be wildly popular with builders – a major turnaround from years past, when Intel Core processors were considered the gold standard for new desktops. According to a deep dive by PCGamer, new AMD Ryzen processors fill out the top seven bestselling processors on Amazon and account for eight out of ten overall. Meanwhile, the only two Intel chips in the top ten are both 10th-gen processors, with Intel's latest Rocket Lake Core-series processors not appearing until much further down the list at #37, as of this writing. On Newegg, Intel does manage to place better, with its latest Core i7-11700K processor hitting #5 on the list and the Core i7-10700K and Core i9-11900K placing at #8 and #9, respectively. Newegg has more of a reputation as a PC component retailer than Amazon, so the rankings of Intel's latest Rocket Lake processors are definitely more reflective of enthusiast interest, which is a good sign for Rocket Lake, at least. We also reported last week that AMD sales at German retailer Mindfactory accounted for roughly three out of four chips sold, though this was an improvement for Intel, as Team Blue only accounted for 14% of sales the month before. Even with this uptick, though, Intel's latest chips do appear to be struggling against AMD's Ryzen 5000 onslaught for desktop CPU market share. We've reached out to Intel for comment on these latest figures and will be sure to update the story when we hear back from the company. Needless to say, this definitely puts pressure on Alder Lake to perform better than Rocket Lake has so far, but there's some wide latitude for both optimism and pessimism on that front. 
Alder Lake processors are going to be Intel's first major push into the big.LITTLE architecture pioneered by ARM that is used in the Apple M1 and Apple M1X chips. This architecture does put a higher emphasis on power efficiency over raw computing power, though it has been making significant strides in the performance department. Whether that performance increase is enough to entice desktop builders to go with Alder Lake remains to be seen, though the first Alder Lake-S chips are expected to launch later this year specifically for this enthusiast market, so we won't have long to wait. If Intel is able to pull a silicon rabbit out of a hat on this one, then it might be a whole different story next year, given that Intel's latest chips will have a several-month head start over the expected late-2022 launch of AMD Zen 4. AMD's next chip will not feature a big.LITTLE architecture, so it will be an interesting – and high-stakes – test of whether Intel is able to out-engineer AMD back into its once-vaunted leadership position. AMD wallops Intel in desktop CPU sales from major retailers – for now
  20. Intel has provided new details regarding its latest fab complex in the United States. Intel CEO Patrick Gelsinger stated that the new fab campus will cost between $60 billion and $120 billion. Gelsinger also stated: We are looking broadly across the U.S. This would be a very large site, so six to eight fab modules, and at each of those fab modules, between $10 billion and $15 billion. It's a project over the next decade on the order of $100 billion of capital, 10,000 direct jobs. 100,000 jobs are created as a result of those 10,000, by our experience. So, essentially, we want to build a little city. By the end of this year, Intel will finalize the location of its upcoming major semiconductor manufacturing hub in the United States as part of its IDM 2.0 strategy. The complex will incorporate between six and eight modules that will manufacture chips using the company's most advanced fabrication processes. Furthermore, it will be capable of packaging chips using Intel's proprietary techniques such as EMIB and Foveros, and will operate a dedicated power plant. Intel has not revealed which nodes the initial module of the new facility will support, but since it will probably be operational as early as 2024, the fab will likely manufacture chips using the Intel 4 and Intel 3 manufacturing technologies. The production capacity of the new facility has also not been revealed yet. Gelsinger further said: We're engaging with a number of states across the United States today who are giving us proposals for site locations, energy, water, environmentals, near universities, skill capacity, and I expect to make an announcement about that location before the end of this year. Each semiconductor fabrication module will cost between $10 billion and $15 billion, so the investments by Intel into the complex over the course of the next ten years could top $120 billion or be as low as $60 billion. 
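As a quick sanity check, the $60 billion to $120 billion envelope quoted above follows directly from Gelsinger's own figures of six to eight modules at $10 billion to $15 billion each. A short Python sketch of that arithmetic, using only the numbers in his statement:

```python
# Cost envelope for the planned fab complex, using only the figures
# quoted in Gelsinger's statement: six to eight fab modules, each
# costing between $10 billion and $15 billion.
modules_min, modules_max = 6, 8
cost_min, cost_max = 10, 15  # per module, in billions of USD

low_end = modules_min * cost_min    # fewest modules, cheapest each
high_end = modules_max * cost_max   # most modules, priciest each

print(f"Total investment range: ${low_end}B to ${high_end}B")
# Total investment range: $60B to $120B
```

The often-cited "on the order of $100 billion" figure simply sits near the middle of that range.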
Source: Tom's Hardware Intel has disclosed new details about its fab complex that will resemble a little city
  21. Intel only recently launched the NUC 11 Extreme, called "Beast Canyon", with 11th-gen Tiger Lake processors, and it looks like we are already getting leaks related to the next generation of hardware. And if today's leak is accurate, then the upcoming NUC 12 would be really special, as it could actually feature Intel's own high-end Xe HPG discrete graphics. The next-gen NUC will apparently be called the NUC 12 Enthusiast, and an alleged picture of it with its specifications has leaked on Weibo. As mentioned above, the highlight of this leak is the possibility of the NUC 12 having Intel's own Xe HPG-based discrete GPU (DG2). In fact, according to this image, there will be the option to choose among three separate graphics SKUs carrying 16GB, 12GB, and 8GB of VRAM. The 16GB model is presumably SKU1, the variant with 512 Execution Units (EUs). We aren't really sure about the two other variants, since their memory capacities haven't been confirmed yet; they could be any of the remaining DG2 models except SKU5, which has 4GB of VRAM. You may be wondering whether these GPUs could in fact be AMD's RX 6000 series GPUs, since the VRAM amounts match those of Navi 21, Navi 22, and Navi 23. A tiny detail on this alleged next-gen NUC 12, however, rules that out: the NUC will apparently feature a DisplayPort 2.0 option, which is absent on Navi cards as well as on Nvidia's GeForce RTX 3000 series cards. CPU-wise, the NUC 12 will likely feature some form of Intel's next-gen Alder Lake-P mobile processors. In terms of system memory, it seems like there is no change from this year. Source: 从未完美过 (Weibo) via HXL (Twitter) Next-gen NUC 12 could pack Intel's very own discrete GPU (DG2) with up to 16GB VRAM
  22. Intel will make Qualcomm chips in new foundry deal In addition to providing chip packaging for Amazon’s AWS business, Intel has announced its first major customer for its new Intel Foundry Services business: Qualcomm. Best known for designing the Snapdragon chips that power most major Android phones, Qualcomm will start to have its chips manufactured by Intel in the coming years using Intel’s upcoming 20A process. No timeframe has been announced for when the first Intel-made Qualcomm chips will arrive or which of Qualcomm’s products Intel will produce. Additionally, Amazon’s AWS will be working with Intel Foundry Services, relying on Intel’s packaging solutions (although Intel won’t be directly making chips for Amazon). Qualcomm will rely on Intel’s newly announced Intel 20A technology node, scheduled for release in 2024. Intel 20A will introduce RibbonFET, Intel’s first new transistor architecture since 2011. Intel previously announced its new foundry business as part of new CEO Pat Gelsinger’s “IDM 2.0” strategy shortly after he took the reins of the company. Intel Foundry Services was a key part of that plan, one that would see Intel expand beyond making its own chips to handle production for third-party companies. Qualcomm and Amazon mark the first — and the highest-profile — partners that Intel has announced so far for IFS, although Gelsinger has previously mentioned that Intel is in talks with over 100 companies for foundry work. Intel will make Qualcomm chips in new foundry deal
  23. Intel has a new architecture roadmap and a plan to retake its chipmaking crown in 2025 All the news from Intel Accelerated Intel is rethinking how it releases — and brands — its semiconductor innovations, CEO Pat Gelsinger announced today at the company’s Intel Accelerated webcast. The announcement includes the broad strokes of the next half-decade of Intel’s processor roadmap, new chip and packaging technologies, and a promise of an “annual cadence of innovation,” with the ultimate goal of seeing Intel retake its leadership in the processor space by 2025. Future Intel products (starting as early as its upcoming 12th Gen Alder Lake chips later this year) will no longer use the nanometer-based node nomenclature that both it and the rest of the chipmaking industry have used for years. Instead, Intel is debuting a new naming scheme that it says will provide “a more accurate view of process nodes across the industry” and how Intel’s products fit into that landscape. How that works in practice is that those new third-generation 10nm chips will be referred to as “Intel 7,” instead of getting some 10nm-based name (like last year’s 10nm SuperFin chips). At first glance, it sounds a lot like a cheap marketing tactic designed to make Intel’s upcoming 10nm chips look more competitive next to products from AMD, which are already on TSMC’s 7nm node, or Apple’s 5nm M1 chips. And while that’s technically true, it’s not as unfair a comparison as it might look. In modern semiconductors, node names don’t actually refer to the size of a transistor on a chip: thanks to advances like 3D packaging technologies and the physical realities of semiconductor design, that hasn’t been the case since 1997 (as noted by ExtremeTech). And from a technical perspective, Intel’s 10nm chips are broadly on par with “7nm”-branded hardware from competitors like TSMC or Samsung, using similar production technologies and offering comparable transistor density. 
That’s something that translates to commercial hardware, too: we’ve already seen that Intel’s current 10nm chips are still competitive with AMD’s cutting-edge 7nm Ryzen chips, for example. All that is to say that Intel’s rebranding here isn’t entirely unfair, even if the new nomenclature does make it harder to parse when those bigger node-change advances are happening. Intel’s updated roadmap and node naming (Image: Intel) Here’s a look at Intel’s new roadmap and what it all actually means. • Intel 7 is the new name for what would have been Intel’s third-generation 10nm technology and the successor to Intel’s 10nm SuperFin (aka Intel’s second-generation 10nm chips, found most notably in its 11th Gen Tiger Lake chips). Intel says that the new Intel 7 hardware will offer approximately 10 to 15 percent improvements in performance-per-watt compared to the previous generation — or, as is always the case, improved power efficiency and battery life should hardware manufacturers prefer to keep performance the same. The first Intel 7-based products will show up as early as this year, with the already previewed Alder Lake chips coming at the end of 2021 for consumer products, and the upcoming Sapphire Rapids chips in 2022 for data centers. • Intel 4 is the architecture formerly known as Intel’s 7nm process, which Intel infamously was forced to delay to 2023 last summer following manufacturing issues. Originally planned for 2021, it’s the next major jump in technology for Intel, using EUV (extreme ultraviolet) lithography — something already utilized by Samsung’s and TSMC’s 5nm node products, for comparison. It’ll still use the same broad FinFET transistor architecture that Intel’s been using since 2011. Thanks to all those improvements, Intel 4 is expected to feature a transistor density of about 200-250 million transistors per mm², compared to about 171.30 million transistors per mm² on TSMC’s current 5nm node. 
Intel says that Intel 4 will offer an approximately 20 percent jump in performance-per-watt while cutting down on overall area. Production is set for the second half of 2022, with the first Intel 4 products planned for 2023 (Meteor Lake for consumer products, and Granite Rapids for data centers). • Intel 3, set for manufacturing in the second half of 2023, is the new name for what would have been a second-generation 7nm product under Intel’s previous naming scheme. Like Intel 4, it’s still a FinFET product, although Intel says it’ll offer additional optimizations and use of EUV for roughly an 18 percent increase in performance-per-watt compared to Intel 4. No release date or product names for Intel 3 chips have been announced yet, but presumably they won’t be available until 2024. • Intel 20A is the name for the next generation of Intel technologies that, under the old scheme, would have been the architecture following the formerly branded 7nm node. It’s also the most substantial announcement that Intel made today, technologically speaking, one that will see Intel debut its first new transistor architecture since FinFET in 2011, called “RibbonFET.” The new architecture will mark Intel’s first gate-all-around transistor, a fundamentally new transistor technology for the company that promises greater transistor density and smaller sizes. Additionally, 20A will see the introduction of “PowerVia,” a new technology that allows wafers to be powered from the back of the chip, instead of requiring power to be routed to the front. A closer look at the coming updates for Intel 20A. (Image: Intel) The “20A” in the title is meant to evoke the “Ångstrom era” of semiconductor design — an Ångstrom being a unit of measurement smaller than a nanometer. (20 Å = 2 nm, although, like the other rebranded Intel names above, Intel 20A doesn’t refer to a specific measurement on the products themselves.) 
Intel’s 20A isn’t expected to ramp until 2024, and, like Intel 3, it doesn’t have any formally announced release date or products yet. • Intel 18A is the farthest-out piece of Intel’s roadmap and will feature the second generation of Intel’s RibbonFET technology for “another major jump in transistor performance.” Intel says that Intel 18A is in development for “early 2025,” and that it expects this generation of technology to re-establish its semiconductor leadership. In addition to all of its process roadmap news, Intel also announced two major updates to its Foveros chip-stacking packaging technologies (the second generation of which is set to debut in Intel 4’s Meteor Lake in 2023). Foveros chip stacking combines several hardware elements into a single package, like Intel’s Lakefield chips, which pile together five CPU cores, an integrated GPU, and DRAM into a compact stack to save internal space compared to a traditional design. Foveros Omni will allow for more variety in stacked chips by making it easier to mix and match tiles, regardless of their specific size — for example, allowing for a base tile that’s smaller than the top tile in a stack. And Foveros Direct will allow for direct copper-to-copper bonding between components, reducing resistance and decreasing bump pitches. Both of the new Foveros technologies are planned for production in 2023. Intel’s new names may help the company recontextualize its current and future products more accurately against its competition, but the fact remains that Intel is behind. Even accepting that Intel 7 is on par with 7nm products from other foundries, those foundries are already past their 7nm chips and on to 5nm hardware, which means that the companies that rely on those external foundries — like Apple, AMD, Nvidia, Qualcomm, and virtually every other major tech company — can still get chips that are more advanced than Intel’s best work. 
Apple’s superlative M1 Macs, for example, already use 5nm chips from TSMC — and handily outpace Intel’s comparable products. AMD is rumored to be working on 5nm Zen 4 processors for as early as 2022, too, which could offer similarly concerning competition for Intel from its already encroaching competitor. Even with the ambitious annual cadence of its roadmap, Intel is playing from behind; it doesn’t expect to fully catch up to the rest of the industry until Intel 20A in 2024. And it doesn’t expect to reclaim leadership in the semiconductor business until 2025, with Intel 18A. And all of that assumes Intel doesn’t hit any more delays or manufacturing snags like the ones that held up both its 10nm and 7nm processes (which arguably put the company in its current situation in the first place). After years of setbacks, though, it’s clear that the revitalized Intel isn’t going down without a fight. The next few years will show whether its efforts are enough. Intel has a new architecture roadmap and a plan to retake its chipmaking crown in 2025
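Two of the numeric claims in the roadmap coverage above are easy to verify: the "20A" name corresponds to 2 nm under the standard conversion of 10 Å to 1 nm, and the quoted Intel 4 density of roughly 200-250 million transistors per mm² works out to about a 17-46% advantage over TSMC's 171.30 MTr/mm² 5nm figure, if those estimates hold. A minimal Python check (the density numbers are the article's approximations, not measured values):

```python
# 1) "20A" evokes Ångstroms: 1 nm = 10 Å, so 20 Å = 2 nm
#    (the name, like the others, is branding rather than a feature size).
def angstrom_to_nm(angstroms):
    return angstroms / 10

print(angstrom_to_nm(20))  # 2.0

# 2) Intel 4's quoted ~200-250 MTr/mm² vs TSMC 5nm's ~171.30 MTr/mm²:
#    roughly a 17-46% density advantage if the estimates hold.
tsmc_5nm = 171.30
intel_4_low, intel_4_high = 200.0, 250.0
adv_low = (intel_4_low / tsmc_5nm - 1) * 100
adv_high = (intel_4_high / tsmc_5nm - 1) * 100
print(f"Intel 4 density advantage: {adv_low:.0f}% to {adv_high:.0f}%")
```

Density per mm² is, of course, only one axis of comparison; performance-per-watt and yield matter at least as much, which is why the article treats the naming change as branding rather than a technical leap.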
  24. Intel warns of CPU stock shortages in near future Intel CFO says to expect ‘acute’ supply problems in Q3 Intel has warned that we are entering a period where the supply of its processors for consumers is going to become thin on the ground, so unfortunately, we can expect trouble ahead on the stock front. In fact, Intel’s chief financial officer, George Davis, has said there will be ‘acute’ supply problems in Q3 (from now through to September), following CEO Pat Gelsinger’s recent observation that the global chip shortage is likely to continue for one or two years, so perhaps until 2023. Davis said on an earnings call (as transcribed by Seeking Alpha): “Persistent industry-wide component and substrate shortages are expected to lower CCG (Client Computing Group) revenues sequentially. We expect supply shortages to continue for several quarters but appear to be particularly acute for clients in Q3. In data center, we expect enterprise, government and cloud to show further recovery in Q3.” Heavyweight priorities Intel is prioritizing the production of heavyweight processors for the likes of data center use because that’s where the big profits lie (and also due to contracts the chip giant is bound by). Of course, if you only have a certain amount of production capacity, it makes sense to produce the biggest-earning CPUs with the beefiest premium. Indeed, we’ve seen this before when Intel has struggled with supply issues, with the firm sacrificing lower-end processors to produce high-end Core models and server chips. The central issue here is a problem with substrate (the base layer of the chip) supply, and as Tom’s Hardware, which picked up on this, points out, the specific issue is with insufficient Ajinomoto ABF substrates. 
Lacking these, Intel is constrained in terms of the number of CPUs it can actually make, and while the company has made moves to work on self-sufficiency – finishing production of ABF substrates in-house, and also investing in ABF substrate makers – these efforts won’t have an impact in mitigating shortages until later in the year. Davis said: “We did a really good job of eating up a lot of our substrates, some of which we thought we would have available to us in Q3.” He added: “In Q3, we could see we had a real supply challenge, it is acute. But Q4, we are doing everything we can to help our substrate suppliers increase supply, including finishing up some of their manufacturing in our own facilities, which is something we could do with as an IDM [integrated device manufacturer].” Analysis While stock shortages are hardly anything new these days, and we’ve previously been told that component supply issues aren’t going away anytime soon – despite some glimmers of hope regarding GPUs – this latest admission from Intel is certainly disappointing. More to the point, the production issues for Q3 being described as ‘acute’ – which sounds rather serious – is an ominous note for those looking to pick up an Intel chip in the near future. Furthermore, remember that Intel is preparing to launch its next-gen Alder Lake CPUs later in 2021. Given the increasing amount of leakage around Alder Lake, and the fact that sample chips are reportedly already being slyly sold over in China, it seems like the 12th-gen processors are on target for a rumored October launch. That’s Q4, of course, where Intel expects matters to pick up a little from Q3, but if Alder Lake turns out to be as promising as the grapevine reckons, it’ll likely be in serious demand. And given what we’re hearing here, it’s not difficult to imagine another situation where CPU stock for the new models becomes a struggle to find, with scalpers then getting involved, bringing the usual results and spiralling prices. 
Intel warns of CPU stock shortages in near future
  25. Intel says its 10nm production is now exceeding 14nm wafers Intel reported its second-quarter (Q2) 2021 financial results last night. Revenue-wise, Team Blue hasn't seen much change in its situation compared to last year's Q2 results. Worrying for Intel is the fact that its data center group (DCG) revenue has shrunk by 9%, probably due to the excellent performance offered by AMD's EPYC server CPUs. The non-GAAP earnings per share overall, though, have improved by 12%. For those interested, you can find the full earnings report here. Aside from that, the company also had some positive news to share regarding its long-delayed 10nm FinFET process. Pat Gelsinger, the new CEO of Intel, has stated the company is "now manufacturing more 10-nanometer wafers than 14-nanometer". Here's his full statement from a call transcript (via Seeking Alpha): Under IDM 2.0, our factory network continues to deliver and we are now manufacturing more 10-nanometer wafers than 14-nanometer. As 10-nanometer volumes ramp, economics are improving with 10-nanometer wafer cost 45% lower year-over-year with more to come. A few years back, Intel had been pretty confident about its 10nm process, often touting its advantages against competitors' nodes as well as its own previous processes. The company originally hoped to launch the first 10nm-based chip in the form of the Cannon Lake family of processors back in 2015-16. Evidently, that didn't happen, and the company was forced to revise its famous "Tick-Tock" cycle into the 'Process-Architecture-Optimization' (PAO) model. The company finally launched its first 10nm processor with the hybrid Lakefield design, which was for mobile. Desktop users are yet to get a taste of Intel's 10nm process, but thanks to the improving yields of the node, we will likely see it arrive in this form factor soon, starting with Alder Lake-S. Intel says its 10nm production is now exceeding 14nm wafers