Micro Center leaks specs and pricing for Intel’s new Alder Lake Core i9 CPU

source link: https://www.theverge.com/2021/10/22/22740133/intel-core-i9-12900k-spec-release-date-price-alder-lake

The hybrid-architecture CPU will apparently go on sale November 4th for $669.99

By Richard Lawler@rjcc Updated Oct 22, 2021, 2:07pm EDT

Even though someone managed to secure early purchases of a couple of Core i9-12900K CPUs, we still didn’t have detailed information about Intel’s upcoming line of Alder Lake chips. A (now-removed, but shown below) sale page popped up on Micro Center, suggesting the chips will be available for $669.99 when they launch on November 4th.

[Image: Micro Center product listing showing the Core i9-12900K specs and pricing]

The spec sheet reveals this chip will have a 3.2GHz operating frequency, capable of 5.2GHz in Turbo Mode, with 16 cores and 30MB of L3 cache. As expected, the specs reveal it also supports DDR5 memory, PCIe Gen 5, and the Intel Z690 chipset motherboards that should go on sale at the same time.

During its Architecture Day 2021 event in August, Intel promised Alder Lake chips with 16 cores using eight performance cores and eight efficient ones. This leak lines up with the top-end promises, right down to listed support for 24 concurrent threads (two each on performance cores, one thread at a time on the efficient cores) and 125W thermal power.
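As a sanity check on the leak, the 24-thread figure falls straight out of the core counts; a minimal sketch of the arithmetic:

```python
# Alder Lake i9-12900K topology as described in the leak:
# 8 performance cores with 2 threads each (Hyper-Threading),
# 8 efficient cores with 1 thread each.
P_CORES, E_CORES = 8, 8

threads = P_CORES * 2 + E_CORES * 1
print(threads)  # 24, matching the listing's 24 concurrent threads
```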

Update October 22nd, 2:07PM ET: The noted listing on Micro Center has been removed.


There are 31 comments.

PCIe Gen 5

Gen 4 just released, so is this a typo?

Posted  on Oct 22, 2021 | 9:58 AM

I think Intel is moving to Gen 5. Gen 3 lasted a really long time. Gen 4 not so much…

PCIe Gen 4 launched in 2017. It's just that the vast majority don't use it and don't need it. PCIe Gen 4 is only now starting to get mainstream adoption, so PCIe Gen 5 is unnecessary unless you're a company.

Not true at all; that's very much a mainstream consumer thing to say, and it's short-sighted and naive… Pros need PCIe Gen 5 now for use cases like RAID NVMe SSDs, and NVMe PCIe Gen 4 SSDs have effectively already saturated Gen 4.

There's also a new PCIe 5.0 power connector for GPUs that'll dramatically reduce the number of 8-pins needed to power modern high-end GPUs; you could use that one connector instead of three 8-pin cables, which is huge for modern computing, especially for prosumers and pros.
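To put rough numbers on the saturation claim: per-lane PCIe bandwidth doubles each generation, and the fastest Gen 4 NVMe drives already read at around 7 GB/s, close to the x4 ceiling. A back-of-the-envelope sketch of the math:

```python
# Approximate usable bandwidth of a PCIe x4 link, the usual width for an
# NVMe SSD. Gens 3-5 all use 128b/130b encoding; each generation doubles
# the raw transfer rate.
TRANSFER_RATE_GT_S = {"Gen3": 8, "Gen4": 16, "Gen5": 32}

for gen, gt in TRANSFER_RATE_GT_S.items():
    per_lane_gb_s = gt * (128 / 130) / 8  # GB/s per lane after encoding
    print(f"{gen} x4: ~{per_lane_gb_s * 4:.1f} GB/s")
# Gen3 x4: ~3.9 GB/s
# Gen4 x4: ~7.9 GB/s
# Gen5 x4: ~15.8 GB/s
```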


The lack of pickup on PCIe 4 is also an Intel blunder; they were late to it.

Posted  on Oct 22, 2021 | 2:10 PM

Honestly, I don't really see the point of having a big.LITTLE-esque core configuration in a desktop CPU… I guess the idea is that you can get good single-thread performance and reasonable multi-thread performance at the same time without your desktop becoming a space heater?

It's a little of that. Basically, the idea is that all background tasks (virus scanning, backup jobs, downloading updates, other OS tasks) can be told by the scheduler to use the efficiency cores instead of the performance cores. This means your power-hungry performance cores, which are great for gaming or productivity tasks, will be free more often, with the side benefit of the chip running a little cooler (in theory).

A great example of this architecture is Apple's new M1 lineup. Its scheduler does a great job of deciding which incoming tasks go where, and Apple designed it to be slightly aggressive about where tasks land. Here's a good article describing it:
https://eclecticlight.co/2021/05/17/how-m1-macs-feel-faster-than-intel-models-its-about-qos/
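The QoS-based routing the replies above describe can be sketched as a toy scheduler. Everything here is illustrative, not a real OS API; actual schedulers are far more involved:

```python
# Toy hybrid-aware scheduler: tasks tagged with background QoS land on
# efficiency cores, everything else on performance cores.
P_CORES = [f"P{i}" for i in range(8)]  # performance cores
E_CORES = [f"E{i}" for i in range(8)]  # efficiency cores

def assign_core(task_name: str, qos: str) -> str:
    """Pick a core for a task based on its declared QoS class."""
    pool = E_CORES if qos == "background" else P_CORES
    # Spread tasks across the pool deterministically for the example.
    return pool[len(task_name) % len(pool)]

print(assign_core("virus_scan", "background"))    # lands on an E-core
print(assign_core("game_render", "interactive"))  # lands on a P-core
```

The point of the sketch: the routing decision needs only a per-task hint, which is why OS support (the scheduler knowing which cores are which) matters as much as the silicon itself.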

If Intel cared about efficiency, they wouldn't let some of their CPUs approach 300W of power usage. Big/little is pointless on the desktop.

Posted  on Oct 22, 2021 | 2:25 PM

If Intel cared about efficiency, they wouldn't let some of their CPUs approach 300W of power usage.

Yes, but not all the time. The highest-end chip, the i9-11900K, can hit nearly 300W of peak draw, but it isn't sitting there all the time.

I don't disagree, but the issue is that they can't fix the efficiency or the power draw without redesigning the chips in the first place. That 300W peak draw? It's still on the 14nm process, which has been pushed to its limits. Yes, the 11th gen is a 10nm design backported to 14nm, which loses efficiency. It's more or less their answer to AMD: basically, "give it as much juice as possible so we can have the best all-core clock speed and claim to be competitive."

While big.LITTLE doesn't give as much of a performance benefit in the desktop space, it's going to allow them to redesign their chips for both performance and efficiency.

This will also help with the PC power consumption "bans" that states like CA, OR, WA, and HI, as well as BC in Canada and countries like Japan, are putting into place. If the move to 12VO PSUs, along with big.LITTLE CPU design, can make computers practically sip power during idle or average usage, it'll be great for everyone.

Posted  on Oct 22, 2021 | 2:49 PM

Big/little is pointless on the desktop.

You’re wrong about this.

"Little" cores have higher performance-per-die-area and performance-per-watt ratios compared to "big" cores.

Looking at the Apple A15, the "big" cores take up more than 2x the die space of the efficiency cores. They also use over 3x the power to get slightly over 2x the performance of the "little" cores.

By adding "little" cores to desktop chips, you can net better multithreaded performance in real-world scenarios where the computer isn't running just a single application.

Think of it as being an improved version of hyper-threading.

Posted  on Oct 22, 2021 | 2:54 PM
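The arithmetic behind the comment above is easy to check. The numbers below are rough stand-ins for the commenter's estimates ("more than 2x die space, over 3x the power, slightly over 2x the performance"), not measured figures:

```python
# Ratios of a "big" core relative to a "little" core, A15-style estimates.
big    = {"area": 2.2, "power": 3.2, "perf": 2.1}
little = {"area": 1.0, "power": 1.0, "perf": 1.0}

for name, c in (("big", big), ("little", little)):
    print(f"{name}: perf/watt = {c['perf'] / c['power']:.2f}, "
          f"perf/area = {c['perf'] / c['area']:.2f}")
# prints:
# big: perf/watt = 0.66, perf/area = 0.95
# little: perf/watt = 1.00, perf/area = 1.00
```

On both metrics the little core comes out ahead, which is the whole case for spending die area on them when the workload is multithreaded.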

Comparing Apple's cores to Intel's is pretty funny, seeing how the new M1 walks all over anything Intel has. Let's wait till the reviews are out first.

Posted  on Oct 22, 2021 | 3:18 PM

Did you even think about that before you typed it?

"Big little is stupid on the desktop!"
"Intel CPUs use too much power!"

Gee, if only there was some solution to lower power usage… like, if you had some more efficient cores for when you didn’t need the ones that go all out… too bad that’s stupid.

Posted  on Oct 22, 2021 | 6:28 PM

I guess the big problem I see is that schedulers on Windows have very often sucked (and I say this writing from a Zen3 desktop). I honestly feel like a big part of the reason that Apple’s chips look like such beasts is just because they 1) are willing to throw a decent amount of cache at all the right places [versus say Qualcomm on mobile] and 2) they’ve got a great scheduler that’s tightly integrated with the rest of their system.

My gut says that if the scheduler on Windows had anywhere near the level of optimization that macOS does, you'd see a substantial performance uplift in some tasks (this also explains why Apple's devices have looked like absolute beasts in some benchmarks while being basically the same as competitors in others once you normalize things). Of course, that's easier said than done, and Apple's task is pretty easy since it's one OS that runs on, what, three chips now? on the Mac end.

Posted  on Oct 22, 2021 | 4:27 PM

You do know that Win11 specifically addresses the thread scheduling issues of the past, right?

Posted  on Oct 22, 2021 | 5:40 PM

Fair enough. I’m just sitting here wondering how Windows is going to handle this when Microsoft can’t even get it to play nice with AMD CPUs.

Posted  on Oct 22, 2021 | 4:55 PM

You're not wrong. The big.LITTLE design is primarily for portable use. However, Intel is spending billions on adding a big.LITTLE design to catch up to Arm/Apple/AMD. You don't spend billions on a new design and then leave it out of half your market (desktops), so they will include little cores on desktops. It should improve idle efficiency, so it's only a good thing. Plus, having big.LITTLE on desktops encourages Microsoft and developers to account for it.

Posted  on Oct 22, 2021 | 2:14 PM

There are also power use regulations, even for desktops, and the power usage is causing problems for their customers. Any efficiency gain Intel can find is a good thing, even if they're only now finally getting off 14nm.

https://www.windowscentral.com/power-regulations-cut-select-dell-pcs-certain-us-states

Posted  on Oct 22, 2021 | 2:40 PM

What I'm wondering is whether this 8-big, 8-little architecture is supposed to be their top-end chip. We'll have to wait for benchmarks, but if so, it would seem they're already ceding the top end of the market to AMD. Unless they radically improve per-core performance, it wouldn't even be competitive with AMD's last-gen 5900X/5950X and their 12/16 cores.

Posted  on Oct 23, 2021 | 4:39 AM

No. The idea is that when you're working on an Excel sheet or a PowerPoint slide… or reading The Verge in Chrome, you don't need powerful processing for that.

Chrome is maybe a bad example…

Posted  on Oct 22, 2021 | 2:54 PM

You've got CPU 3D render apps, and then next comes Chrome. The only good thing about it is that it advances the hardware industry: the minimum CPU will have to be more powerful just to be able to run the latest version of Chrome.

Posted  on Oct 22, 2021 | 6:56 PM

Excel is a bad example. I work on monstrous spreadsheets where a simple function can suck up gigabytes of RAM and bring 12-core CPUs to their knees.

Posted  on Oct 22, 2021 | 7:05 PM

Frankly, Excel is just terrible at its job. For any sort of large-scale data analysis, Python/R/Julia and their specialized libraries would do a much better job.

Posted  on Oct 23, 2021 | 4:45 AM

Every few weeks Intel has a new processor.

Yes, it is their business after all.

Posted  on Oct 22, 2021 | 2:18 PM

But this is finally their desktop chips getting off 14nm, I believe, after all these years. It's a big deal.

Posted  on Oct 22, 2021 | 2:41 PM

I want inclusive LL Cache or GTFO.

Posted  on Oct 22, 2021 | 2:26 PM

Why is it that when I see a turbo speed, it calls me back to the days of hitting the turbo button on my 486 DX2-50?

Ah, PC hardware nostalgia.

Posted  on Oct 22, 2021 | 2:30 PM

I don’t see the little cores as pointless on desktop, but it’s important to consider how much a little core can do compared to a thread on a big core.

Are they doing this for real-world usability, or to check a box or amortize development costs from the laptop chip side?

Posted  on Oct 22, 2021 | 4:04 PM

If I had to guess, it's probably both. The power efficiency of their previous-gen CPUs was already significantly worse than AMD's offerings, and I'd imagine doing something like this is way easier for Intel than a complete architecture overhaul or a move to a smaller node.

Personally, I’m interested in AMD’s take on heterogeneous core configurations. Ryzen cores are already pretty modular so I hope we see something similar coming from team red in the near future.

Posted  on Oct 22, 2021 | 4:52 PM
