Just built a new rig last week. This isn't at all surprising. Prices are so out of whack right now. $1000 used to be the ceiling for the highest-end video cards (like a 1080 Ti); now $1000 is the floor. There are no good current gen (or even previous gen) cards available for under $1000. The 4080 is a horrible value and still regularly listed at $1200-$1400. The 4090 is overkill and sits around $1700-$2200. Even 3080s, two-year-old tech at this point, are regularly selling for near a grand. AM5 motherboards are insanely priced. Want a 10Gb onboard NIC? Be prepared to shell out $1000 just for the motherboard. Add to all of that, this latest batch of CPUs is just stupidly power hungry - like 240W+ under load (except for the non-X variants of the AMD 7000 series, just released last month).
It used to be you could buy a lot of computer for $2-3K; now that figure is closer to $5K. Combine these prices with the fact that a lot of folks just went through this pain 2 years ago during the pandemic, and you aren't going to see stuff flying off the shelves any time soon.
I am sorry, but the "rule of thumb" website, https://www.logicalincrements.com/ disagrees with you, heavily so. You can still buy an awful lot of computer for $2k, and it lays out exactly how. The prices are real; they link -- yes, with affiliate links -- to real sales on Amazon/Newegg/etc. To quote what you can expect from their "outstanding" tier at 1628 USD:
> This expensive tier has the highest possible cards that still maintain a reasonable performance/price. Sure it is pricey, but it is luxurious!
> Expect high performance at 1440p, and solid performance at 4K even in the most demanding games.
For 1865 USD:
> Max most titles at 1440p@144Hz, and solid framerates in 4K, even with max settings.
This is misleading. For nearly $2000, you better be getting 4K with ray tracing (max settings), AKA, a top of the line device. 1440p/144 is midrange now.
Which that $1865 one does not provide. Even before ray tracing, most games struggle to hit the 144 Hz you're aiming for, and with ray tracing that drops down to ~40 FPS (Cyberpunk 2077, for example, on that 6900 XT card). You have to enable workarounds like DLSS/FSR to make those games playable.
The only way you're getting good framerates at 4k is without ray tracing, but you're paying $2000 and still have to worry about disabling settings? Ridiculous!
So yes, they are overpriced. For $2000 you should not be worried about having to enable FSR.
The usual excuse when this is brought up is "well just don't play those games, they seem unoptimized" to which again, the question is, why are you spending two thousand dollars to avoid playing certain games? How absurd.
>This is misleading. For nearly $2000, you better be getting 4K with ray tracing (max settings), AKA, a top of the line device. 1440p/144 is midrange now.
You're both just setting an arbitrary baseline of performance at $2k. Arbitrary comparisons cannot be misleading.
Yes, I don't know where this person is getting the idea that you shouldn't have to worry about settings at $2k? Fifteen years ago you could spend $10k and still have to worry about settings!
Well the website is called logical increments, it's really hard to call ray tracing a logical increment. You are sacrificing too much to gain so little, at least for now.
This is how I see it now too. The manufacturers are capitalizing on people needing to have the best by making even more expensive top tier components. You don’t need 4K or ray tracing to enjoy a triple-A game. Last year, I got a 3070-equipped system, force feedback driving wheel, and a 4k TV as a monitor for under $2k. The games are still quite gorgeous, and I have no idea what I’m missing by not having something more expensive.
Ray tracing IS the logical increment the industry has decided on though. So again, you're spending $2000 to miss out on what the industry has decided on is the next logical step, only to have to spend another ~$500 for whatever upgrade to actually play those games.
>Ray tracing IS the logical increment the industry has decided on though.
I disagree. Nvidia decided it didn't have enough exclusivity. If the industry had decided it was a logical increment, it would have come as a request from game and game-engine developers to the hardware companies. It would have been a more natural progression: we wouldn't have it yet, and it also wouldn't feel like it's in a perpetual beta phase.
Sounds like you would like to use a console for gaming.
When I built my last desktop, in 2015, you would've blown way more than 2k just on the over-the-top monitor that you're now mad can't be cheaply driven at high speed and high fidelity in 2023. Now you can get a decent quality 4k monitor plus a desktop that will drive it for years of normal tasks, all for less than 1k.
2015 is eight years ago. In PC terms, that's forever.
I got my 4k 60Hz monitor in 2017 for $400 or so. You can get them for $250 these days, so I agree that is pretty mid-range by now - or even the lower part of it. That leaves $750 for the computer.
According to the logicalincrements site linked above, that gets you an RX 6600 - which they rate for 40 FPS @ 4k in The Witcher 3 - a game which came out in 2015.
So no, you cannot get a decent 4k monitor and a PC able to drive it for less than 1k. That's the entire problem: there is no midrange anymore.
Steam Survey says you're wrong. Unless something changed recently, most people are on 1080p with midrange cards (GTX *60 & *70). I was running a 970 and 1080p until 2021. I play mainstream stuff, FotM, and competitive.
The way the market is structured just doesn't offer a whole lot of reward. Forknife, LoL, CSGO, CoD, etc. are all built to accommodate toasters. The big titles are built for consoles and poorly optimized, so there isn't a whole lot to be had aside from higher framerates - and many times these are locked.
I upgraded for compute power.
As for resolution I'm not convinced it's worth the upgrade. Maybe in a few years.
But I'm generally in agreement; I did a big upgrade a while ago and while it's great for a few things, the extra hardware and resolution don't mean much while playing Stellaris.
8 years is no longer forever in PC terms. It used to be you could only go a couple years before your computer could no longer run anything but that's no longer the case.
For an example, I usually build a PC once the older one is horribly outdated. So that was 1999, 2002, 2006, 2011, and ... Well that one has turned into the PC of Theseus. I upgraded to an SSD and $300 video card in 2018, and in 2022 I bought a new cpu/mobo/ram. Still plays games at 1440p at acceptable frame rates.
If I wanted to game at 4k 240Hz, yeah I'd probably have to spend $3500 (and it still wouldn't run that great) which just tells me that 4k gaming isn't ready for prime time
Generally I agree. However, I saw negligible performance improvement from getting a ten year newer mobo/cpu/ram, so the previous trend didn't apply. I only upgraded because there were some system stability issues (probably ram caused, but buying new ddr3 ram in an attempt to fix it seemed a waste). From a performance point of view, the new GPU and SSD in 2018 were the new computer experience.
> 2015 is eight years ago. In PC terms, that's forever.
I think the point is that this isn't true anymore.
In the 80s or 90s eight years was huge in terms of computer hardware advances. In the last eight years though? Meh. Aside from one of my laptops, all of my computers are older than 2015 and they are all perfectly fine for current use. Hardware doesn't advance very quickly anymore, which is great, but bad for sales.
1440p should be midrange but the gpu makers refuse to make that a reality. How is it that decent framerate 1080p was entry level SEVEN years ago, and cards that double the speed of the rx480 are still going for 300-500 dollars?
If you spent over 2k on an over the top monitor in 2015, then you were being silly because even top of the line GPUs weren't hitting 1440 at a stable rate.
We're not talking about normal tasks, if you want normal tasks even a chromebook would be fine. We're talking about intensive tasks, like gaming, which push boundaries and justify the high cost... except for components like GPUs which are dramatically overpriced still. These components make it misleading when people then claim that you can get a top of the line system for under $2k - which is not possible (a top of the line GPU costs more than $2k alone).
You can make tradeoffs, but when paying that much who wants to deal with tradeoffs?
I use a 4k 28-inch as my primary monitor for code at home (the laptop screen has email and Slack). I run i3wm and typically code with 4 terminal windows, splitting the monitor into 4 equal sub-windows, each at 1080p and running vim. It's very nice, and whenever I have to visit the office and am stuck with their 1080p monitors I find myself severely limited because I'm so used to my home setup.
I've been "coding" on dual 27" 4K@60 displays for years. Can't go back to lo-dpi displays after getting used to having truly legible/beautiful text rendering.
I love my pair of 32" 4k monitors for coding. Screen real-estate lets me have everything I want open, open. I don't have to go digging for windows. I don't have to worry about arranging windows carefully. I get to see a lot of code, and a lot of terminal history.
Gaming and TV on the other hand, I'm perfectly happy with a smaller 1080p monitor. There just isn't much to be gained by a few more pixels.
You can also save on the case (cheaper options should be available), and grab a Ryzen 7900, which should have similar perf and a comparable price point to the Intel parts they used, and comes with a stock cooler, shaving off an additional ~$100. I'd also probably skip the HDD and grab 32GB RAM.
That's what I'm thinking of doing when I build PCs for my partner and me later this year. I've been looking at benchmarks, and I'm not as worried about the top end of performance as I am about Intel being about ready to release a new socket design with the next refresh.
Enterprise drives shipments, and the PC industry is fucked because: the surge of Windows 7 migration is over, they’ve become appliances, and everyone bought thousands of laptops in 2020/21.
The only reasons to replace business desktops are swollen batteries and Microsoft. My teams are responsible for 250k devices globally. Failure rates are <3%, and 75% of failures are batteries and power supplies. With the transition away from spinning rust complete, we have more keyboard failures than desktop PC failure. I’m taking the PC refresh savings and buying Macs, iPads and meeting room hardware.
Speaking from the smaller scale side of IT, in the past ~6 months or so, I've deployed more BeeLinks ( https://smile.amazon.com/Beelink-Graphics-Desktop-Computers-... ) or even smaller, Atom-powered boxes ( https://www.amazon.com/GMKtec-Nucbox5-Desktop-Computer-Windo... ) than Lenovos or Dells. And my clients are really happy with them, too. These are, of course, still technically PC shipments, but the amount of money involved for the manufacturers is absolutely minimal. And most office workers don't need more.
Yep, tech work, and especially remote software work, is all done with laptops and docks.
I'm writing this from my home gaming rig, which is an old, not-cool-enough-for-Win-11 (thank god), desktop. I don't know what I'll be replacing it with when it keels over and dies. Maybe a Mac tower? Maybe a Linux rig. But it'll be my PC, not Microsoft's if I can help it.
I think this is largely correct. I’ve been working remotely for the better part of 10 years and used laptops for around 6 of them. The portability was great when I was split between multiple jobs.
Changed to a desktop about 3 years ago and wouldn’t go back unless I really needed that portability again. I had forgotten how much faster a well spec’d desktop machine actually is. And upgrading parts I find a lot better than replacing the whole laptop every 2-3 years.
Currently on a Ryzen 5950x, 64gb ram, multiple gen 4 ssds, and a workstation GPU. The only laptops to beat such a setup in most tasks weigh a lot, cost more than double the desktop, and are 2 years newer.
And then you can still use SyncThing[0] to share things. Depending on what you're doing of course, and how friendly the applications you use for it are to being used that way.. but if it's all in the browser and email profiles and a bunch of data files, you're golden. Getting used to that "lifestyle" has been the biggest leap in joy of using computers since SSD, for me personally.
[0] And/or FreeFileSync or similar for manual operation: I don't want e.g. browser profile data to be synced while the browser is in use, and running the sync manually is no biggie at all.
especially with the M2 Pro refresh... it's practically a studio at that point.
Apple is conspicuously maintaining some differentiation, like larger RAM sizes, to force some people to the Studio, but the distinction really is slim at this point. Better cooling I guess? But M2 is not a hot chip. Maybe a few more Thunderbolt ports?
TPM 2.0 is old table stakes, circa 2014-2015. The processor requirement (8th gen Intel) is probably more of a concern, but even that is almost 6 years old at this point. I'd wager most business laptops/desktops are Win 11 ready, though that doesn't mean anyone wants it.
TPM 2.0 existed in 2015. That doesn't mean every PC purchased then has it.
But the general point is that they keep coming up with things like this.
Compare modern CPUs to ones from ten years ago and they're maybe twice as fast per core, and have more cores which applications the majority of people use don't support. For most people the ten year old machine is perfectly adequate.
But it doesn't have TPM 2.0 or whatever, and the copy of Windows 7 it came with is no longer supported, so you eventually have to buy a new one (or stop using any supported version of Windows).
It's the same scam with phones but even worse there, because at least I can install an up to date version of Linux on a ten year old PC. Or a twenty year old one for that matter.
hardware TPM is only required if your CPU doesn't support it as an onboard thing... almost all Ryzens have TPM 2.0 support via fTPM for example.
And the hard requirement for a TPM is only for OEM procurement anyway. You as a company can probably put Win11 on anything you want, just build your image and deploy it same as ever.
Even if you can technically do it, many people won't know that when making purchasing decisions, or won't want to run hardware that isn't officially supported.
Meanwhile the requirement means third party software can soon start relying on all recent hardware having that feature and you may soon run into trouble if your hardware doesn't have it.
"Nobody got fired for buying IBM" was a thing for a reason. If you spend the same amount you did last year on new PCs, nobody asks questions. If you put the money somewhere else and then after three years Enterprise Vault Thing 4.0 comes out and requires TPM 2.0, the CEO is mad because you need three years worth of budget in one year to replace all the old PCs.
Smarter companies will find better ways to avoid this, but 50% of companies are below average.
Updooted yesterday but didn't have time to respond, but yup, fair.
Again though I think the fact that soft-TPM has existed for a long time softens that blow. If you have Intel 8000 series or higher or Ryzen 2000 series or higher (iirc) then you have TPM 2.0. Do you need a standalone TPM for a specific reason? Or maybe calling it "soft" is even a misnomer, it's hardware, it's iTPM vs dTPM I guess.
And sure, at an enterprise scale going forward, why not - it's ten bucks at scale. But it just seems like a lot of to-do about nothing... your legacy machines can be imaged up if you want, everything purchased in the last 5 years has hardware support already, and yeah, you can toss a whole second module in going forward if you want. But I don't see a hard barrier here to keep enterprise from moving to Win11.
Especially given how much hardware was purchased during COVID WFH... all of that stuff already has support, it's really only stuff that is 2+ years pre-covid that is even an issue.
I wonder if anyone has done some large scale testing on just restricting laptop charging to 80% or around that mark. I swear the large PC vendors charge these things to "full 100%" for the extra 30 minutes on some shitty video endurance benchmark and it ends up causing bloating batteries at alarming rates.
If desktop OSes stopped being garbage maybe they'd attract users back. Win still has the ti lol r crap, Linux is suffering under snaps and usual deck chair reshuffling and osx is apple imprisoned, and neglected regardless.
We measure them differently because of lower volume and a self service model - they are more like phones to us. Anecdotally, M1s so far are like iPads.
In the datacenter - absolutely. In a PC, different story. Poor thermals, rough handling and dust are brutal to mechanical parts. Solid state stuff will survive even outside of its typical operating temperature range.
>Add to all of that, this latest batch of CPUs are just stupid power hungry - like 240w+ under load (except for the non-x variants of AMD 7000 series, just released last month).
That's because in the race to get the highest benchmark scores, both companies have set the stock clocks to a level that's way beyond what's optimal (eg. adding 100W of power consumption to get 5% higher benchmark scores). The CPUs themselves are fine, you just have to adjust the power/clock limit lower.
In a desktop, hardly anyone cares. Everyone is using AIO coolers that can handle these insane temps. So the 15% improvement is usually worth it, considering the CPU won't even draw that much power unless it's actually being used to the fullest.
AIOs do not significantly outperform tower coolers. Their benefit is flexibility of placement. Tower coolers require room for ... a tower. And that placement is both inflexible and in need of airflow through most of the case.
Their node disadvantage is mostly good marketing from TSMC though. Intel 10 (10nm, but that's not the real nanometers) is ~100M transistors per mm2, while TSMC 5 (5nm, but again, that's not the real nanometers) is ~130M transistors per mm2.
Sure, 30% better, but not a 2x improvement as marketing suggests.
Actual feature size stopped shrinking a long time ago, though. First they flipped the transistors on their side (FinFET), and now they're wrapping the gate all the way around the channel (GAAFET) to get more out of the same size. Node names haven't just diverged from the actual dimensions (which happened a long time ago); the transistors haven't really shrunk much since like 28nm, and there is no dimension of a 5nm chip that is actually 5nm. ;)
I know you're poking fun at it, but, node names are completely made up and the points don't matter these days. It is a "5nm-class node", if nodes had continued to shrink according to the original plans in 2000 or whatever.
> It's still pretty bad compared to Ryzen 7000. The non-x models are just the regular guys at 65W, like my good ol 3700x.
* 65W TDP, 88W PPT
This little bit of sleight-of-hand from AMD was really, really effective. Boost power used to be included in the TDP number, and Intel expected their TDPs to cover full load when boosting or under AVX. Then AMD made up this new number for the real limit - but it's not called TDP, it's, uh, PPT - and marketed all their chips under a 30% lower number.
It's obviously worked because people do this "my CPU is only 65W!" thing all the time and no, it's not, it's 88W, unless you've turned off boost.
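For anyone who wants to sanity-check that, here's a minimal sketch (assuming AMD's documented stock ratio of PPT = 1.35x TDP on AM4/AM5):

    # Rough sketch: advertised TDP vs. the actual socket power limit (PPT).
    # Assumes the stock 1.35x TDP-to-PPT ratio AMD uses on AM4/AM5.

    def ppt_from_tdp(tdp_watts: float, ratio: float = 1.35) -> float:
        """Default package-power-tracking limit for a given advertised TDP."""
        return tdp_watts * ratio

    for tdp in (65, 105, 170):
        print(f"{tdp:>3} W TDP -> ~{ppt_from_tdp(tdp):.0f} W PPT at stock")

    # 65 W -> ~88 W, 105 W -> ~142 W, 170 W -> ~230 W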
Honestly you don't have to adjust the limit: the "power under load" angle just gets completely overblown because people go based on a reviewer's definition of load
They might be 240W under extreme load, but I can play AAA titles on my i9 at 240hz barely cracking 50% CPU load. And that's with a 3090, so not exactly a mismatched CPU/GPU situation.
At those types of loads the CPU doesn't even try to hit boost clocks most of the time, so you're nowhere near the figures you often see touted based on benchmarks.
Yeah; games to this day are still pretty bad at utilizing multiple cores. Any idiot can do the math; the i7 13700K has 16 cores and an advertised max TDP of 250W. That's a ton of power, but if you're only really stressing 2 or 3 cores the real power draw isn't that high. These chips are so powerful that your bottleneck is almost always the GPU, unless you're playing CS:GO at 1080p and aiming for 800fps, so realistically it's common to see 25-60% utilization on 1 or 2 P-cores, and the rest just running Windows background shit.
This is proven by any outlet which takes the time to measure performance per watt (e.g. https://www.hwcooling.net/en/intel-core-i7-13700k-efficient-...). Intel has consistently driven higher PPW with every generation when you're comparing like-for-like binned chips. AMD, on the other hand, has been a bigger victim of what the OP is describing; while their raw PPW is generally higher than Intel's, so they have room to fall, their Ryzen 7000 chips aren't consistently posting higher PPW numbers over Ryzen 5000.
In other words; both Intel and AMD are doing well here. If you're stressing every core on the CPU at 100%, you're going to draw a lot of power, but you're also going to be completing workloads much faster than on 12th or 11th gen, so your aggregate power draw will be lower. The low-tier media outlets that post "OMG 250W" aren't doing research, and also don't care to, because they get clicks from tons of people like the OP who eat outrage at face value.
It's not just low-tier media outlets (at least in terms of reach), this is largely driven by large names like LTT and Gamer's Nexus
People don't realize that midrange mobile chips from both Intel and AMD outperform top-of-the-range desktop SKUs from a few years ago, because they're constantly bombarded by hot takes based on things like CPU rendering and unzipping files...
At this point I'm convinced it's just an informal cycle, where if they actually reported in a realistic manner, no release would be exciting. If they didn't pair bottom of the line CPUs with top end GPUs and odd settings configurations under the guise of "not wanting to reflect a GPU bottleneck", it'd be a lot clearer how badly needs have stagnated vs the speed of these new SKUs
>At this point I'm convinced it's just an informal cycle, where if they actually reported in a realistic manner, no release would be exciting. If they didn't pair bottom of the line CPUs with top end GPUs and odd settings configurations under the guise of "not wanting to reflect a GPU bottleneck", it'd be a lot clearer how badly needs have stagnated vs the speed of these new SKUs
Wow that's the most blatantly misleading chart I've seen out of these types of outfits in a long time!
I'm sorry this is kind of long, but I really had to make sure I wasn't missing something, given how obvious they were about it.
The complaint people have is that they're not matching up hardware realistically... so they take a ~$350 Intel CPU and pit it against a ~$250 AMD CPU, instead of the ~$350 AMD sibling it launched alongside of to artificially induce a larger gap!
How much do you want to bet the same graph with a 2700x would have given a much less convincing gap?
-
Now you might be thinking I'm just being uncharitable... but they actually tested the correct SKUs specifically for future proofing, twice over the last few years! Talk about a smoking gun: https://www.youtube.com/watch?v=11GvlWamF2E
And no surprise, it took using a $1000 GPU at 1080p to shake out half the gap shown for a similarly priced GPU in the video!
And even worse, if you actually go and look at the individual titles, unlike the graphs shown here, at no point was the difference enough to affect your ability to hit a playable refresh rate! The largest differences were in games where they were both getting over 300 FPS since again, they're using a $1000 GPU for 1080p!
-
And if you're still not convinced it's an intentional attempt to push a misleading point, I mentioned they've done this comparison twice... the last time the entire point of the video was to not base your choice on future proofing! https://www.youtube.com/watch?v=aZkJ_-vfQa8
They actually concluded by saying there was no meaningful difference, gaming is largely GPU limited, and specifically replied to people claiming that you can just wait for the difference to appear:
I wouldn't hold your breath, I expect you'll upgrade your [CPU] platform once or twice before those extra cores for gaming matter
I don't know how much more blatant you can get than that...
>Wow that's the most blatantly misleading chart I've seen out of these types of outfits in a long time!
>I'm sorry this is kind of long, but I really had to make sure I wasn't missing something with how obvious there were about it.
>The complaint people have is that they're not matching up hardware realistically... so they take a ~$350 Intel CPU and pit it against a ~$250 AMD CPU, instead of the ~$350 AMD sibling it launched alongside of to artificially induce a larger gap!
I'm assuming the "chart" in question is the one shown at https://youtu.be/Zy3w-VZyoiM?t=278. The point of comparing the $250 AMD CPU against the $350 Intel CPU isn't to claim that they're in the same market segment; it's to point out that if you benchmarked CPUs using "reasonable" GPU pairings, you'd see little to no difference between high-end and midrange CPUs. Now, you might argue that benchmarking this way is misleading because in reality you'll never see the performance differential and therefore would waste money by buying the high-end CPU. However, the CPUs actually are faster. Benchmarking them in such a way that their actual performance doesn't show, because they're limited by some external factor, would also be misleading. It'd be like putting out a benchmark between a sports car and an econobox and concluding that they're within 5% (or whatever) of each other in terms of "performance" because the speed limit is 60mph, so they both get you to your destination in around the same amount of time.
You're missing that the video is making a claim with two parts:
a) you need to use unrealistic pairings to get the true margin of the parts under review...
b) we prioritize exposing the margin to allow future-proofing your CPU choice
I'm totally agreeing that they need to use mismatched GPU to make the first point.
-
But I'm strongly refuting the second: they expose the "hidden" margin because again, the real world gains for most owners wouldn't be there and the content would be a lot less exciting.
And they're claiming that it's not about today, but it will be once future GPUs at the same price point are faster... but they concluded the exact opposite before: stating that you should not bank on GPU futureproofing changing the math on which CPU is a better choice.
And what's so damning is that when they wanted to prove that opposite take about the exact same topic they used direct competitors twice.
As soon as they benefited from the gap they decided to downgrade the CPU.
>And they're claiming that it's not about today, but it will be once future GPUs at the same price point are faster... but they concluded the exact opposite before: stating that you should not bank on GPU futureproofing changing the math on which CPU is a better choice.
Going back to your previous comment
>And if you're still not convinced it's an intentional attempt to push a misleading point, I mentioned they've done this comparison twice... the last time the entire point of the video was to not base your choice on future proofing! https://www.youtube.com/watch?v=aZkJ_-vfQa8
>They actually concluded by saying there was no meaningful difference, gaming is largely GPU limited, and specifically replied to people claiming that you can just wait for the difference to appear:
>I wouldn't hold your breath, I expect you'll upgrade your [CPU] platform once or twice before those extra cores for gaming matter
>I don't know how much more blatant you can get than that...
I don't see the contradiction here. The specific advice he was arguing against in the video is "you should get ryzen CPUs because they have more cores and therefore they futureproof better". His argument against that is that games are still single threaded and will be in the near future, so any theoretical benefits you get from more cores will take forever to materialize. He's not making a blanket statement arguing against any sort of futureproofing.
If you can't see the multiple layers of contradiction at this point, I'm not sure how much more plain it can be...
In video 1 in the series he makes a side mention about viewers previously complaining "why are you comparing these at low resolutions, the differences you're exposing won't matter".
But the point of that video is to compare two CPUs for gaming, so he goes at length about the margin exposed in a way that would imply the 8700k "won"... despite briefly mentioning "under most conditions there is no actual difference" right afterwards.
3 years after that he does it again, going at length about the 8700k winning, but then walking it back to "but really they're about the same".
-
Fast forward to now, and this time he's making a video that directly addresses the complaints about low resolution testing.
So this time he chooses a slower CPU, even faster GPUs, ends up with slightly bigger gaps than the ones he previously wrote off (which are more than accounted for by both the even faster GPU this time, and slower competitor). Honesty would dictate a similar "they're essentially the same", but this time he goes on to beat his chest with "well this is why we call out the margin!"
I wonder if maybe you're missing that the dishonesty doesn't stem from having falsified anything; it's about the obvious conflict of interest where they intentionally set up each case to show as much of a gap as possible, regardless of bearing on reality, because that's the flashiest content that lets them make the hottest possible takes.
By "CPU load", loading the CPU is implied. The fact that your AAA titles don't load the CPU, because they're GPU-bottlenecked, is irrelevant. Load the CPU properly, e.g. compiling, and you'll see large power consumption.
Didn't think I needed to explain that load is not a binary concept. Understanding that is just table stakes for making a meaningful contribution to the conversation.
The point is that the CPUs are massively capable at less than 100% utilization.
These reviewers are covering more use cases than "24/7 Linux Kernel compiles", yet they're selling conclusions that only apply in that sort of use case as much more relevant than they actually are.
I haven't bought a non-4K monitor in over 6 years. I honestly don't know anyone who is still using 1080p monitors as daily drivers if they are also using the machine for productivity or media work. But your point is not invalid.
Modern games let you set separate rendering and display resolutions, so you can get most of the benefit of a 4k display without requiring a video card that can render every pixel. The new upscaling techs handle this really well.
1440p monitors aren't that expensive either and for a lot of people 4K at 100% scaling is too small to be comfortable at a typical 27" size. For PC gaming a high refresh rate 1080p or 1440p display is a better buy than a 60Hz 4K one at roughly the same price.
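To put rough numbers on the scaling complaint, here's a quick back-of-envelope sketch of pixel density at a typical 27" size (ballpark math, nothing vendor-specific):

    # Pixel density for common resolutions on a 27" panel.
    import math

    def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
        return math.hypot(width_px, height_px) / diagonal_in

    for name, w, h in (("1080p", 1920, 1080), ("1440p", 2560, 1440), ("4K", 3840, 2160)):
        print(f'{name} @ 27": {ppi(w, h, 27):.0f} PPI')

    # ~82 PPI for 1080p, ~109 PPI for 1440p, ~163 PPI for 4K.
    # At 100% scaling, UI sized for ~109 PPI ends up roughly two-thirds as large on a 27" 4K panel.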
Still using my old Dell 1440p 27" monitor to edit my 4k YouTube videos. I briefly considered buying a 4k monitor this year but I spent my money on a NAS instead. I use three monitors on my desktop, the other two being 1080p. I use the 27" as my main monitor and the others are for docs and videos. I haven't bought a monitor in like 10 years because these things just keep on going. I do have a 4k monitor at work and it's nice, but it does not feel significantly different from my old 1440p monitor. If I had more money I probably wouldn't think much about an upgrade, but I work for a tech non-profit and live in the Bay Area so I am not out buying new stuff that often. The NAS was a long-needed upgrade to serve as a backup for my important media!
I understand when people don't want to spend their own money on displays, but as a business you are literally losing money by using substandard equipment; research shows it's around 10% of salary, much more than a monitor costs.
These days 4k 144hz screens are pretty affordable and good. I picked one up recently because I couldn't stand less than 4k to use while programming. But I agree in games it's not the biggest difference.
I tend to have most of my things half-maximized, so the same width as a 1080 but twice the vertical space. Generally three half-screens like that with vs code or a browser or whatever, and then the last half has a bunch of smaller overlapping utilities (terminal, notes, chat, etc).
I don't generally maximize windows across a whole screen except for word (~8 document pages on-screen at once) and visio.
I run single 30" 4K monitor at 100% scaling. Main reason for 4K is - I usually have 2-3 editing windows arranged side by side when coding. It fits a lot of text vertically and I like that.
As an extra benefit: I am a sucker for good photos and viewing those on large 4K is way better in my opinion. 4K Youtube and Netflix also looks better.
The text isn’t fuzzy? I have a work-provided 1080p display and it’s really noticeable switching between that and a Retina display unless I’m across the room, even without my glasses.
I was on ~100dpi monitors for years. I just picked up a 26" 4k Dell for a steal - it's noticeably more crisp than my 1440p screens. I'm not getting rid of the 1200p and 1440p screens on my workstation, but ... 4k is nice.
@ChuckNorris89 man... why do they hate their devs? That's just mean =/ Hopefully they don't have you all on a bunch of $400 dell optiplexes too. Pouring one out for you brother.
>The cheapest 3050 in Canada is $389. It is ~10% slower than a 1070ti.
Fortunately, used 1070s and RX580s are 150CAD (100USD).
The combination of board prices being completely out of whack, and the performance delta of the 3080 over the 3070/2080Ti (made worse by the fact that the 4090 is that leap again over the 3080) being as large as it is, means the value proposition in the middle has disappeared.
It's not that the 3080 isn't worth 700USD or that the current 4000 series cards don't have a similar price/performance ratio- because they very much are priced appropriately; it's that the cheaper new cards (especially the 3070) have a far worse ratio than the expensive ones do. This is also partially why the most popular gaming GPU on Steam is the GTX1650.
And with current-generation console games targeting the equivalent of that 1070/2060, buying anything less than a 3080 is an objectively bad decision, unless you're someone who plays a lot of competitive shooters and thus can benefit from an intermediate for high framerate reasons. (The fact that most of those games aren't particularly fun to play also hurts the hardware industry.)
Grandparent seems to have forgotten about 4070 Ti (only $800, what a bargain!) but yeah, $800 is currently the floor for current-gen hardware in the sense that nothing has launched below $800 despite being almost 6 months into this product cycle. AMD's cheapest is a $900 MSRP (but starting to fall below that) and NVIDIA's cheapest is an $800 MSRP.
That space is currently filled by older, slower, less efficient, less-featured last-gen products. Both companies have some significant amounts of inventory they want to burn through after the mining thing and it's going slow because of the general declines in shipments.
Generally though I think people are remembering the past with rose-colored glasses... not saying OP said this in particular, but a lot of people have latched onto the idea of the "$300 x70 tier", and the x70 tier has literally never been $300 MSRP for the entire time it's existed. It's bounced back and forth between $350 and $400 even 10 years ago, $329 was the lowest price it's ever launched at and people have latched onto that one as being the price x70 has to match forever, plus a little extra. GTX 680 (full-die GK104) was $499 10 years ago, for a 300mm^2 chip, GTX 670 was a GK104 cutdown for $399 for example, and GTX Titan was where you got the full GK110 at a mere $999 (in 2012 dollars).
Ampere was somewhat below the baseline - using Samsung was an attempt to make cheaper cards and push the cost down - so with $499 being a "bargain" price (for the 3070) before the pandemic cost spirals got too bad, and factoring in the more expensive TSMC 5nm node, I think the realistic price for a 4070 (AD104 cutdown, whatever you call it) is probably $600-700 at this point.
So there's definitely some gouging taking place, but, a lot of people are fixated on that $300 number and that's just not going to happen. Costs have just spiraled a lot more than people realize, Pascal was not cheap either and that was 7 years ago (!) this june, and everything since then has been on older, cheaper nodes to help keep the costs down... until now. Throw in the pandemic generally blowing a lot of costs up, and yeah things are expensive now.
And yes, 4850/4870 were good cheap cards, but AMD could do that because they got onto 55nm ahead of NVIDIA, and that was back in the days when shrinking first was a real advantage, you could match a high-end card with a cheap midrange card if you got to a newer node first. That's not how it works anymore, higher wafer costs and R&D costs mean newer nodes are better but they're not really cheaper even considering you get more chips per wafer. Costs are growing fast enough to eat up the increases in density.
There are complex reasons related to patent licensing why it makes no sense to put 10Gbe on a motherboard right now. If you want 10Gbe, don't get a $1000 motherboard for it, get a motherboard with a free pcie 4.0 slot and get an adapter. That'll cost an extra ~$100 today, and an extra ~$20 in August.
Fair enough - I'm clearly paying a premium to keep a cleaner interior on the build. I tend to run exactly one card - the video card, and aim to get everything else onboard, but as you pointed out that's definitely not the only way to go, and certainly not the most cost efficient way.
Vanity can cost as much as a market is willing to pay.
From a function standpoint: I'm happy that multiple PCIe slots is still the standard. If I didn't have functional wants such as "more accelerators and more I/O" then I'd go with a cute ITX build.
I did get a compact case recently because 5.25", 3.5", and 2.5" bays are no longer an interesting use case for my daily driver, but now I find that even if I did want to shell out for a new high-end GPU the only model that would fit in my case is a perpetually sold out AIO model.
One is my new Ryzen APU in a small (sub-7-liter) case, which sits half empty because I haven't even dropped a GPU into it. All my games and CAD run just fine on the APU. All the included peripherals are more than I've ever needed.
The other is my old Athlon64x2 4850e in a bigass M3N78 Pro board, with drives and I/O out the wazoo. It's old enough to have floppy and PATA ports, but new enough to have SATA and USB too (and even a PCIe slot), so it's my media mule. I power it up whenever I need to do CD ripping, floppy archiving, that sort of stuff. I actually picked up the fastest chip that would fit the board (a Phenom II X4 945), just for giggles because they're $30 on eBay now, but promptly swapped back to the 4850e because 125 watts in a CPU is unconscionable when 45 does the job just fine.
The latter, of course, is chock-full of cables like they're goin' out of style. I made a few long floppy and PATA ribbons for working with external disks so they sit crammed up in the bottom when not in use, etc. And the non-modular PSU has like a dozen cables all splattered everywhere. It's the opposite of vanity, and I love it.
Posted the basic question in another post but I’ll ask here too. What are you using to saturate a 10gbe nic? Inet??? I find a true 10gbe inet connection unusual but I’ve been out of that game for a while.
The only ones I see are sfp+ cards, those also require transceivers (at ~$50 per) to work. The cheapest 10gbase-t cards I can see are refurbished PCIe 16x cards with active cooling for ~$40, you probably don't want those either. Really, as a consumer you want either a PCIe x4 or ideally a PCIe 4.0x1 card, with a modern much more power-efficient chip.
SFP+ uses less power, has lower latency, and DAC/Twinax cables cost less than $30 for 6 meters. The only downside is that DAC is limited to about 6 meters, I think.
I run sfp+ 10 gig at home. You can get SR transceivers for maybe $6ea last I made a purchase. Appropriate LC fiber patch cords are similarly dirt cheap.
The current prices are indeed high, but if I were to upgrade my desktop right now with the best components that I can find, that would cost only $1500.
The $1500 would pay for an AMD 7950X, an ASUS MB with ECC support and a decent configuration of the expansion slots (Prime X670E-Pro), a Noctua cooler suitable for 7950X (e.g. NH-U12A) and 64 GB of ECC DDR5-4800 memory (which for 64 GB costs $100 more than the non-ECC variant).
For the rest, I would continue to use the case, the SSDs, the 10 Gb/s NIC and the GPU that I am currently using.
If I also wanted to upgrade the GPU, I would buy an RTX 4070 Ti, for 3 reasons. It provides enough extra performance to make an upgrade worthwhile; it has the best performance per $ among all recent GPUs (at MSRP a 7900 XTX would have better performance per $, but it can only be found at much higher prices); and lastly, the 4070 Ti is the only recent GPU for which the vendor specifies the idle power consumption and the power consumption during light use (e.g. video decoding), and the specified values are small enough to be acceptable in a desktop that is not intended for gaming.
There’s been a lot of ink spilled about the decline of Moore’s Law and how it hasn’t yet exactly fallen for all aspects of computer engineering. I think it’s fallen for customers, though. The economics of exponential speed improvement in traditional CPU design have gone away, and the capability/complexity ratio of software collapsed with the fall of Dennard scaling. No fundamentally new applications have come out (save ML, which is not particularly suited to CPUs or even GPUs), so consumers are happy to keep chugging along with their current setup or move more load to their phones.
Even if the increase in hardware cost stays at parity with inflation, it's tremendously more expensive than it used to be, when waiting six months could get you more machine for your budget.
Gaming, a previous driver of high-end consumer growth, has split into truly mobile, consoles, and very high end PCs. But complex games take more capital and time to develop, so recouping costs is important (except if the studio is part of a conglomerate like Microsoft that can weather temporary market forces). I’d imagine that places pressure on game developers to aim for maximum compatibility and a scalable core. So too bad for the Mac, great for phones, and great for consoles (especially with monetizing the back catalog). And new PCs will have to fight against good-enough, and lower demand funding new hardware and software.
> Add to all of that, this latest batch of CPUs are just stupid power hungry - like 240w+ under load
Eco mode is a thing for AMD CPUs. There is no point really in not using it by default - benefits are very marginal with power cost being disproportionally huge. And AMD are doing it more for marketing reasons to gain some single digit percentages in benchmarks.
So just enable it (105 W one) and enjoy way more reasonable and efficient power usage with basically almost the same performance.
This is almost exactly my gaming setup. I know people have different expectations but in my house this is the fastest gaming computer and can play the best games (like MSFS2020) perfectly.
My main computer is an M1 MacBook Air which is essentially a perfect computer. It never feels slow. It was a little over thousand bucks. In no world I can imagine could this not be considered an amazing performance value.
Ah, those youngsters have no idea what expensive means :)
Back in the early 90s a half decent SVGA card could set you back $900 easily (e.g. ATI's 1MB Graphics Ultra from 1991).
And that's 1992 dollars, mind you. So about $1,900 adjusted for inflation.
Heck, a SoundBlaster Pro sound card was $300 back then (~$670 adjusted for inflation) - the base model for $170 was considered a steal at that price :D
And that thing couldn't do half as much as even the cheapo ALC897 sound chip in a basic motherboard.
Hence why I was big into flight simulators up until they started asking for a 486 / 4MB as a baseline; then it was over for me, as I had other, more relevant things in life to spend the money on.
I bought a 1080 quite a while after they launched. I still have it. I have thought about building a new PC lately, and it seems to me that newer cards are ... not as much as an upgrade as I might have been led to believe? Especially given the cost. If I did build a new PC today, I am not entirely sure I would buy a new video card. The performance/value ratio doesn't seem as appealing as jumping to the 1080 did at the time back then.
Well that's not too bad maybe. I probably will upgrade sometime relatively soon. It just hasn't seemed as compelling, but that could also be my enthusiasm has waned since I was younger. Hard to know, I guess.
> Want a 10Gb onboard NIC? Be prepared to shell out $1000 just for the motherboard.
a quick search turns up options for around 500 euros
but don't be fooled: 10GbE copper is a power-hungry mess. go with the lower NBASE-T speeds if you just want some progression over the last twenty years of consumer networking, or invest in optical and get a real upgrade (20/40/100 Gbps)
What is the use case of 10Gb Ethernet for the regular PC user? Or even the enthusiast?
I have a sprawling homelab and home theater and scarcely need regular 1G. Last summer I transferred a media archive of ~10TB to new hardware, which completed overnight across Cat5. Is there some hot new hobby that generates multi gigabit bursts that I don't know about?
Editing video files. My kids are both into making videos and even though most of them are crap, I still want to get them into good habits of "store important files on the storage server" and we don't have any redundancy on local file storage. The difference between 1 and 10Gbps ethernet is noticeable there. (They only have 2.5Gbps to the desktop switch which has 10Gbps link to downstairs, so they only get ~250MB/s transfer, but they each have it simultaneously if they want.)
Without 10 GbE (or at least 2.5 GbE) using any kind of NAS is a pretty significant performance downgrade over directly connecting hard drives (even using USB 3). GbE has been too slow for even a single hard drive for at least 15 years.
Uh, at maximum speed on a gigabit network, it takes 22 hours, 13 minutes to transfer 10 TB.
That’s not overnight unless you’re a hell of a sleeper.
On a 10Gig network, assuming no other bottlenecks (harder to do), it’s 2 and a quarter hours.
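The math here is easy to reproduce. A minimal sketch (raw line rate only, ignoring protocol overhead):

    # Transfer time for a given amount of data over a given link speed.

    def transfer_hours(terabytes: float, link_gbps: float) -> float:
        bytes_total = terabytes * 1e12        # decimal terabytes
        bytes_per_sec = link_gbps * 1e9 / 8   # line rate in bytes/s
        return bytes_total / bytes_per_sec / 3600

    for gbps in (1, 2.5, 10):
        print(f"10 TB over {gbps} Gb/s: ~{transfer_hours(10, gbps):.1f} hours")

    # 1 Gb/s: ~22.2 h, 2.5 Gb/s: ~8.9 h, 10 Gb/s: ~2.2 h (real-world overhead adds a bit more)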
Personally, I use 10 gig because 100MB/s is really slow. Even 2.5Gb is much, much better and makes it a lot more likely something besides the network is the bottleneck.
I personally move around enough data, it bothers me enough to be worth it.
Just my NVR archive is 70TB+, and that is with only a 3 month retention.
Throw in Photogrammetry data sets, decades of photos and videos, hundreds of thousands of PDF scans, and a ton more stuff, and, well. It’s a lot of data.
My only 10g link is between my main workstation and my nas. Between the SSD on my workstation and the 2 ssd array in the nas, I easily saturate 1g. Makes a big difference for video work and certain dev tasks (i.e. generating disk images). Granted I spent less than $100 for this capability using older solarflare cards and optics.
Perhaps having a 10Gbit internet connection and it is faster to download/upload things than 1Gbit? I'll never max it out, but it's $54/month instead of $44 for 1Gbit so why not. Why 1Gbit instead of 100Mbit?
AM5 w/ PCIe 5, DDR5 support and 10GbE? I've already eaten the cost, but I would love a link to the board you found. I saw a lot in that price range with 2.5 or 5Gb but didn't run into any with 10GbE. That said, I may have missed one. If nothing else it may be useful for when the wife's gaming rig gets an upgrade in a few months.
As to why I'm sticking with 10Gbe - I have a 24 port 10GBe switch for the house so going with the kit that matches the network I already have.
Not only are the prices out of whack, but the newest games coming out all seem to have some sort of technical issue on the PC. Shader stutter is nearly a universal thing in most new releases, or the developer doesn't optimize for the platform at all (Callisto Protocol and to a lesser extent, Hogwarts Legacy). So not only are you paying more money than ever, you're experiencing certain issues that just aren't there on consoles.
I recall Jonathan Blow talking about how it's basically impossible to eliminate stutter on Windows now due to a number of design decisions in the OS driver system itself.
I'm wondering if this is the moment for Linux gaming. Valve has certainly taken it a long ways from where it was.
Linux has the same issue in principle, the problem is that shaders must be compiled anew for each driver version, at runtime. It is impractical for the developer, but a distributor like Steam can actually track and redistribute caches for most combinations of drivers and GPUs in active use. Also, warming up the cache (not just precompiling) is possible.
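A purely conceptual sketch of why that is (not any real Steam or Mesa API - just illustrating that the usable cache key includes the GPU and driver build, so one precompiled blob can't cover everyone):

    # Hypothetical illustration: a shader cache keyed by GPU model and driver version.
    import hashlib

    def cache_key(shader_source: str, gpu_model: str, driver_version: str) -> str:
        blob = f"{gpu_model}|{driver_version}|{shader_source}".encode()
        return hashlib.sha256(blob).hexdigest()[:16]

    src = "/* some game shader */"
    print(cache_key(src, "RX 6800 XT", "mesa 23.0"))
    print(cache_key(src, "RX 6800 XT", "mesa 23.1"))  # driver update -> new key -> recompile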
Steam is always compiling shaders before starting a game.
What is this needed for actually, can't these come precompiled with the game, just like the game's executable?
There are 3 brands of GPU so max 3 precompiled versions needed? Is that naive to assume this? CPU's need only one version mostly between AMD and Intel (or at most a few for different SIMD instructions)
Unlike low level SIMD instructions, GPUs work on higher level APIs: Vulkan, DirectX, OpenGL. How that gets interpreted by the GPU changes with driver updates. That's how Intel Arc GPU performance started off in a somewhat bad state yet are steadily improving. Changes in interpretation also mean changes in shaders, so you can't just take shaders on one version of a driver and use it on another.
Internal pipelines can vary between GPU architectures too. Currently AMD ships a different branch of their own Windows drivers for the new RDNA3 architecture GPUs (RX 7000 series). So precompiled shaders probably won't work across architectures.
But why do they change this per driver update? That sounds horrible to do!
CPU's are not changing their instructions based on driver updates either.
IMHO, it's time for CPUs to offer thousands-of-floats-wide SIMD instruction sets instead of the measly 512 bits, that can do the same as GPUs but are way more stable, and get rid of those annoying, driver-version-dependent, install-100-python-libraries-to-use-them GPUs.
Performance. Vendors will literally go in and hack the shaders for major titles. Also, drivers have bugs.
GPUs will never be outperformed by a CPU with wider SIMD. Intel learned that the hard way with Larrabee. They are on a different place on the pareto front.
Performance at the cost of stutter. So not really performance...
When did compiling of shaders become a thing? Nvidia GeForce had shaders in the year 2001 but I don't remember ever seeing this then. The first time I saw compiling of shaders happen was in 2017 in a console emulator I think.
So how did it work before then, and what exactly has this compiling now improved compared to before?
It only stutters the first time the shader is used, but modern games use a lot of shaders and they are more complex. In modern APIs it is not just shaders but a combination of shaders and state.
They can be precompiled, but not to the point where there is no stutter, because some optimizations only happen at the point where the shader is executed. That is why you need a warmup. You need to actually execute the shader. Theoretically with Vk/D3D12 you can cache and ship the entire state, but again, it is a combinatorial explosion of drivers and devices.
I think it’s not the moment for Linux gaming in general, but more so the moment for certain devices like the Steam Deck. Fixed hardware platform is what you need to precompile shaders and eliminate stutter. I would love for Valve to put out a box that sits under your TV with an APU on par with the PS5
Mesa (the Linux OpenGL driver) supports caching shaders on any GPU. And Vulkan essentially has shader caching built in at the specification level. I know that Steam ships precompiled Vulkan shaders, and while there's no equivalent for Mesa's shader cache yet, it shouldn't be a huge issue since it would only cause a short delay the first time the shader is loaded.
Playing Cyberpunk 2077 in Linux on Steam Proton currently. I've not seen any stutter whatsoever. I frame limit it to 60fps and it never dips below that, with most settings on the highest (though ray tracing is grayed out in settings despite supported Nvidia card so that's disabled).
There are hardly any slots on motherboards too now these days. One network card may fit next to the GPU. All kinds of limitations to nvme SSD speeds and GPU bus width may start applying just for doing that.
Exactly, as long as GPUs keep swelling larger and larger, pcie cards just aren't going to be practical.
It's crazy that X670E literally has multiple chipsets daisychained together ostensibly to increase expansion capability and yet there's less and less expansion available to end-users. And what there is, is almost entirely blocked by the GPU these days. I would love to see a board with tons of pcie slots attached to the chipset or PCIe switches, but, it wouldn't make the GPU go away for those users who want a discrete GPU.
The only real viable solution I see is thunderbolt, move as much as possible out of the case and break that expansion out as external ports. And tbh the PCIe add-in-card form factor just isn't working for GPUs anymore, the GPU needs to become a standardized module that is connected separately (via riser if desired) and it probably needs a 48V rail. The GPU being 3-4x the TDP of the CPU but completely dimensionally unstandardized and running crazy currents at 12V is not sustainable.
There are so many motherboards that fit exactly what your ideal state describes. They just start to get expensive.
As an example, I'm currently running 2 3090ti's on a MSI board in addition to 2 10GB PCI-Express cards, + 3x m.2 SSD's and I am not even close to fully utilizing the lanes OR other premium slots (like SATA)
If room is an issue, get a larger case or there are also offset/vertical mount options for things like GPU's.
Looks like the current max for any AM5 or LGA1700 board is 5 pcie x16 slots. And of course they won't all be full ones, they'll be some x4-in-x16 (which still is better than nothing, because slot width does affect electrical delivery capability of the slot). And using M.2 slots often eats up some of the pcie slots as well.
Looking at one example - Gigabyte B660 DS3H - you get 1 pcie 4.0x16 and the rest are PCIe 3.0x1-in-x16.
The promise of having these super-powerful chipsets has supposedly been that we can put more stuff on it. Same thing for pcie 5 - stuff doesn't really need it yet but you could put a bunch of slower stuff on switches and get a lot of the capability of HEDT out of a consumer board. That has not materialized, even a $700 consumer board is still pretty much intended for one gpu and maybe 2 nvme slots.
If you want a bunch of pcie expansion you still have to go to HEDT or server, but AMD killed HEDT and then lost interest and went outside to ride bikes. So now it's only server.
ROMED8-2T is the closest thing that exists to what I want, but, the promise of X670E and daisychained chipsets and superfast pcie is that I shouldn't have to buy an Epyc to get some decent number of real x16 slots, or at least x4-in-x16 or whatever.
Agreed, I don't mind spending extra money on GPUs (I have 2 3090s sitting next to me) because the improvement are still worthwhile for my use case, but the CPU prices have been unjustifiable, especially on the AMD side. Increasing CPU price, absurd motherboard price AND needing to buy new RAM, all for an improvement that isn't really too meaningful unless in very specific tasks, is not really worth it. I instead just got a 5900x for my computer and moved its 3900x into a server, retiring its 1600x (which was also sufficient for its work, although at least the 1600x was noticeably slower for transcoding, the 3900x is proving more than sufficient).
I don't keep up with PC component prices but I thought crypto crashing flooded the market with cheap cards? (Then again I guess BTC is back into the $20k's)
BTC hasn't been viable on GPUs for a while, either. It's the Ethereum proof-of-stake change that was the bigger deal, but it doesn't seem to have had a significant effect on prices, especially for newer (3-4 year old) cards.
I wonder how much of this increase is due to social media.
10-15 years ago, people would buy graphics cards to play the latest games on. Epeen was a thing, but limited to just some text in your forum signature.
Now, it seems like half the reason people buy any sort of "status" item is for the clout on Insta. It was exactly the same thing with the PS5 2 years ago - people clamouring for a useless object just to show other people they have it.
10-15 years ago people were posting their rigs, adding liquid cooling, ridiculous lighting, massive overclocking. It was definitely more than a forum signature.
I remember buying a voodoo 3 for 130 GBP. That was pretty high end at the time. Who's the target market for 1k+ cards?!
At one point you could get a decent gaming PC for about 500 or 600 GBP. I doubt that's possible now. And it's madness compared to the hardware in an Xbox Series X. Yes, I know its hardware is subsidized.
Unless your internet connection can sustain 10Gb, what's the point of a 10Gb NIC? I have gigabit fiber that rarely gets above 600-700Mbit. Is a full 10Gb internet connection that common?
Even on LAN do you have I/O that can deliver 10gb/sec to the wire?
It doesn't have to deliver 10Gb/sec to be worthwhile. 3Gbps is easy and already more than a 2.5GbE link can handle. And 2.5GbE switches still cost more than what's available used for 10Gbps.
I've got an ICX7250-24p at home that cost about $200 from ebay. That's 24 ports of PoE gigabit, plus 8 SFP+ ports for 10gig. Noise and power are quite reasonable, though it doesn't belong on the desk next to you.
Regular SATA SSDs are bottlenecked by gigabit and 2.5 gigabit. 116MB/sec is not slow per se. Usually only video editors are waiting on file loading bars frequently enough to make an upgrade worthwhile. That and moving around VM images.
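To put rough numbers on it, here's a back-of-the-envelope sketch in Python (the ~6% protocol overhead and the 550 MB/s SATA figure are assumptions; real numbers vary with frame size and the drive):

    # Rough line-rate math: how much of a SATA SSD's sequential throughput
    # actually fits through common Ethernet speeds.
    OVERHEAD = 0.94          # assumed usable fraction after TCP/IP + Ethernet framing
    SATA_SSD_MB_S = 550      # typical SATA 6Gb/s SSD sequential read, MB/s

    for name, gbps in [("1GbE", 1.0), ("2.5GbE", 2.5), ("10GbE", 10.0)]:
        usable_mb_s = gbps * 1000 / 8 * OVERHEAD   # bits -> bytes, minus overhead
        limiter = "network" if usable_mb_s < SATA_SSD_MB_S else "SSD"
        print(f"{name}: ~{usable_mb_s:.0f} MB/s usable, bottleneck: {limiter}")

    # 1GbE   -> ~118 MB/s (the ~116 MB/s figure above)
    # 2.5GbE -> ~294 MB/s (still well below a single SATA SSD)
    # 10GbE  -> ~1175 MB/s (now the SSD is the bottleneck instead)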
I would have expected the crypto implosion to have a depressing effect on graphics card prices (certainly in the secondary market). Any theories why they remain elevated? Is it just supply chain stuff that everything is experiencing?
Reading this really points out to me how killer of a deal GeForce Now is. I truly don't understand why more publishers don't allow their games on the platform.
It's not a killer deal for the consumer. That's just Nvidia fucking you over either way and still profiting off it. The market is not only bad at the high end; there aren't any ~$200 value-oriented graphics cards that provide significant upgrades over past generations either. The GTX 1060 has only just stepped down from being the most popular card in Steam's hardware survey, only to be replaced by the GTX 1650, a much newer card that costs about the same and performs about the same. And with less VRAM!
There's a lot of bad networks out there. I won a number of Fortnite battle royales testing this, and it was rock solid 120 fps, with barely any detectable latency at the 10 ms ping I was getting.
The lack of noise actually made me prefer it over my gaming box. It’s a real home run for people fortunate enough to have excellent network connections to nvidia’s data centres.
Having what feels like 100ms of input delay at a minimum is pretty awful. Idk if there's some magic way to speculatively render extra frames so that input is resolved locally.
i would suspect that they either fear a loss of profit (b/c platform cut) or reputation (because latency/jitter from bad connection will be wrongly blamed on the game rather than the platform)
Quite a lot, at least based on my experience using Steam and playing various games I've bought for my actual gaming PC that also have builds available for multiple platforms.
As long as they're 64-bit builds, macOS's Rosetta translation layer does a great job of running them without a hitch.
Apple Silicon is a beast. I wish more developers would take advantage of it.
This is only true if your primary game preference is casual games. The mac studio lacks a discrete graphics card. Ignoring the OS, the whole ARM vs x86, and Rosetta, wine, whatever stumbling blocks - just lack of a discrete GPU is enough to make it a no-go as a serious gaming rig. This isn't just me talking out of my neck either. I have 3 rigs I keep around my desk, and one of them is a mac studio. Great for dev and video editing, but for gaming not so much.
Edit: "Why do you need 3 computers?" - I regularly switch between Windows, Ubuntu & MacOS, and I don't like dinking with switching my monitor inputs, dual booting or remoting in. Rather just swivel my chair. Yes I fully realize how ridiculous this is. Some people like nice cars. I like wasting money on computers apparently. ¯\_(ツ)_/¯
It's a lot more than "consumer electronics". When I upgraded my i7 last year for a base model M1 chip, I found that all of my C++ code ran vastly more quickly on a machine that cost about half as much as what it replaced. What is strange is that if you look at the raw specs, it should not have come anywhere near halving the execution time; there are definitely some fascinating things going on in that chip.
The unified memory architecture of Apple Silicon is particularly convenient for anyone building stuff using both a CPU and GPU.
I have no idea, in my long career I’ve never installed a computer game on anything. It might be surprising, but some people use computers for other things.
Good for you, mate, but games were a main driver of hardware innovation for both CPUs and GPUs, were responsible for the PC's adoption in the household, and many people today still prefer them to the locked-down console ecosystems because once your gaming session is over you can use that computing power for other things, like coding, rendering, etc.
The fact you never installed a game is largely irrelevant.
Hi Chuck, you mention hardware innovations, but it's largely well-recognized that the deprecation of cross-platform GPU APIs is the main reason the Mac is not known for gaming, and it has nothing to do with whether or not it has powerful hardware. It has, in fact, some of the most impressive innovations in recent memory, including unified memory and more.
The Mac Studio is one of the most powerful desktops currently on the market, and its integrated GPU competes quite well against many discrete GPUs. It's your comment and subsequent argument about gaming that is ultimately irrelevant on the topic of whether or not it is "powerful".
btw big fan of walker, texas ranger. Please do more like that one.
Many of the most important ML toolchains run natively on Apple Silicon, including PyTorch [0] and TensorFlow. For example, the PyTorch folks have this to say about it:
> Every Apple silicon Mac has a unified memory architecture, providing the GPU with direct access to the full memory store. This makes Mac a great platform for machine learning, enabling users to train larger networks or batch sizes locally. This reduces costs associated with cloud-based development or the need for additional local GPUs. The Unified Memory architecture also reduces data retrieval latency, improving end-to-end performance.
Plenty of other reports out there [1]:
>We ran a sweep of 8 different configurations of our training script and show that the Apple M1 offers impressive performance within reach of much more expensive and less energy efficient accelerators such as the Nvidia V100 for smaller architectures and datasets.
If one is looking for a great bang for the buck and a big savings in energy use, the GPUs in Apple Silicon are a compelling option. Plenty of Apple haters (presumably yourself) like to ignore what has been achieved with this technology, but that doesn't make it any less real.
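For anyone curious what this looks like in practice, here's a minimal sketch of targeting the Apple Silicon GPU from PyTorch (assuming a recent PyTorch build with the MPS backend enabled; the tiny model and sizes are just placeholders):

    import torch
    import torch.nn as nn

    # Use the Apple Silicon GPU via the MPS backend if available, else fall back to CPU.
    device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

    # A throwaway model and batch, just to show that the usual code path runs
    # out of unified memory with no explicit host/device copies to manage.
    model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
    x = torch.randn(64, 512, device=device)

    out = model(x)
    print(out.shape, out.device)   # torch.Size([64, 10]) mps:0 on an M-series Mac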
> McCarron shines a glimmer of light in the wake of this gloom, reminding us that overall processor revenue was still higher in 2022 than any year before the 2020s began.
Suggests a correction precipitated by panic-buying during the supply chain chaos of the pandemic era. Too soon for doom and gloom for the PC market just yet. Mobile devices have been dominating PCs since long before 2020, and if revenues were still growing in the past decade then there's nothing to suggest that this moment is suddenly the inflection point where the whole thing will come tumbling down, even if you do believe that something like that is inevitable.
I agree with the sentiment, but mobile devices have also seen a plunge in sales. Many in the industry expected 2B units/year, but it maxed out around 1.6B. The last few years have seen volatility, between 1.2-1.4B. Last year was the worst since 2016, and the next worst was 2020.
Only Apple has been relatively flat, and probably only they and Samsung are very profitable.
Most of the profitability in "PC" (GPU/CPU) has really been datacenter for 5 years. Again, Intel executed badly, but the decision to focus on datacenter was right.
>This had to have been obvious given how a small company in the UK could build the Raspberry Pi.
According to wikipedia they use chips from Broadcom. I'm not sure how a small company being able to make SBCs using chips from a massive multinational is indicative of how easy it is for cloud vendors to "make their own chips".
The moment RPis were released it was obvious to me that eventually cloud providers/data centers would be using ARM chips due to Intel’s margins. At the scale of millions of chips it makes sense to save money on each chip.
> Yeah, everybody upgraded their WFH office setups in the prior two years, now no one needs a new pc. We’re going to be good for a while.
Also, it feels like phones have entered that "Core 2 Duo" PC stage where upgrades don't really matter as much any more. I know software support can still be an issue, but at least on the iPhone side, I don't feel like I need to upgrade before my phone loses OS support.
Exactly, it's the same with laptops. If you know anything about decent specs, don't buy something with a ridiculous bottleneck, and do a fresh install, you can find a 5+ year old laptop that is amazing for anything other than extreme workloads. I mean like rendering scenes - not VSCode and Slack or something.
Yeah I used a 2012 MacBook Pro 15" Retina until 2017 when the overheating and throttling of it in summer became just unbearable, upgraded again in quick succession due to the crappy keyboard, thermals and battery of the successors, and then finally settled on the 16" M1 Pro.
If the 2012 lasted me 5 years back then, this M1 Pro should last at least as long since the things that drove me to upgrade from the 2012 (mainly thermals) are not a problem with this machine.
I had a 2015 MBP that was falling apart and waited for the M2 refresh. I expect that I won’t get another laptop until 2030.
I have an iPhone 11 (fall 2019) and I see no reason to upgrade.
It does feel like we’ve hit a plateau. The only step up I can see would be on-device ML/“AI” models. Removing the latency and improving offline capabilities of something like Siri would open some doors.
Yeah. On-device ML. Cameras are also still improving YoY as more people abandon standalones. But while I'm usually on a 3 year cycle, I'd have to be convinced with this year's model.
And I may slide in a Mac Mini/Studio in place of my iMac at some point, I'm not really in a hurry in spite of being out of OS update support. It's basically a browser machine given I have a newish laptop.
Indeed. I'm currently stuck with 8GB RAM on my laptop and it feels like the OOM killer [1] is playing whack-a-mole with Firefox and any Electron apps. Start Firefox, then VS Code gets killed by the OOM killer. Start VS Code, then Slack gets killed by the OOM killer. Start Discord, then Firefox gets killed by the OOM killer.
It doesn't help that I'm firmly in the way-too-many-open-tabs camp. Even zswap doesn't really help.
[1] Actually systemd-oomd to be precise, but let's not start the userspace OOM killer debate again in here
Great way to put it. My current phone feels as useful now as the day I got it, whereas previous smartphones started to feel sluggish as apps and sites got slower and it felt outclassed by newer cameras.
COVID/WFH Panic Purchasing for work and school at home.
COVID Cash also put money into people's hands to buy stuff, like computers.
Crypto mining and the GPU shortage were also factors, as people were buying systems and parts for speculation. People were buying prebuilt computers to mine or simply to get the GPU.
Scalpers made everything worse, messing with parts in the supply chain.
So there were the supply and demand factors, plus the extra money for consumers and the speculation in crypto, and it was a perfect storm.
The prices are simply too high for the marginal benefit they offer.
Marginal costs outweigh the benefits, so why would people buy? This is simple economics and they know it, but they still price-fix because they must meet their minimum profits, whatever those may be.
It's a common problem with monopolies: as soon as the marketplace shrinks to only a few players, where the means of production has been concentrated, those players start dictating prices and may collude without even needing some conspiratorial agreement.
Many people also ignore the fact that Intel ME/AMT and the AMD equivalent (features that cannot be disabled, are not documented, and are prime attack targets) are becoming more well known, and in general people don't want them.
Businesses may find value in those features, but for individuals they are a cost (i.e. their privacy, plus greater future risks that are unquantifiable).
They've broken their own market, and the rot will only get worse for them since it's unlikely they will right the course. Many IT people wonder if there isn't some secret government mandate requiring these companies to embed management co-processors. It clearly offers only minimal value to IT, and it's seen as a cost by individuals who know about it.
They really need to reconsider their actual market instead of the fairy magic kingdom type thinking they have been following.
You're putting the fact that modern computers have AMT up with Covid, supply shortages, and Crypto crashes in terms of sales loss???...????? You really, really need to get out of whatever bubble you're in.
AMD ST (formerly PSP) and Intel CSME (formerly ME, not the same as AMT) are the only reason that I, and 3 close friends off the top of my head, are completely disinterested in buying any new x86 CPUs.
All 4 of us work in big tech companies, have 6 figure take home pay even after expenses. I don't mind paying high prices due to shortages. Lockdowns were BS and I was continuing to eat out regularly at restaurants from September 2020 to present in a state with lax mask requirements. Only 1 of the 3 friends lost any money in crypto, and it was a very small amount compared to his annual income.
Just because you and your social sphere don't fully appreciate the privacy implications of CSME and ST, which again, aren't the same as AMT, doesn't mean nobody else cares about them. Have you considered that it might be you who is in the bubble?
> Just because you and your social sphere don't fully appreciate the privacy implications of CSME and ST, which again, aren't the same as AMT, doesn't mean nobody else cares about them. Have you considered that it might be you who is in the bubble?
I worked in a computer store for four years. I probably helped thousands of customers in that period.
None of these concerns ever came up from what I can recall. Not even once. People ask about all sorts of things and I've answered my share of questions about Linux and even FreeBSD, but never this.
Are there some people concerned about this to the point they don't purchase x86 chips? Obviously yes. Is it a broadly held concern? Doesn't seem like it.
The types of people who hold this concern are categorically unlikely to ask for help (or accept unsolicited help) from staff at a consumer computer store. They are much more likely to be the kind of people who can run circles around any consumer technical support or sales staff in just about every technical domain, and who deliberately avoid talking to the staff, since the staff are almost certainly unfamiliar with, inexperienced in, and unqualified to address their niche concerns in the first place.
I'd never ask a Microcenter employee whether AMD or Nvidia cards are going to have better pass-through support on the Xen hypervisor to a BSD guest kernel with any expectation of a knowledgeable response, would you?
Would you ask a Best Buy employee whether the session tokens used by a wireless router's web interface are generated using a PRNG or a CSPRNG and expect them to be helpful?
We had a wide variety of customers on account of being a local business with a long history in the area and a good reputation. This included many tech people and business customers. I have no way of measuring how representative it was exactly, but it's more representative than you might think. Most people don't work in IT and don't want to run Xen in the first place, so it's still fairly indicative what "the average person" thinks – certainly more indicative than you and your 3 friends who work in IT.
As I discussed in another comment (https://news.ycombinator.com/item?id=34775479), what the "average person" wants and cares about is an objectively bad metric for anyone who cares about digital privacy when making security/privacy relevant decisions. The "average person" is functionally illiterate on matters of digital security and privacy.
This thread is about declining CPU shipments, and Intel ME is offered as an explanation for that. It's extremely relevant in this context what "the average person" thinks because if they don't care then it doesn't affect sales.
I'm certainly not arguing that it's the only reason, or even a primary reason, but it absolutely is a reason. We can debate the extent to which sales decline is attributable to this all day long, but fact of the matter is, x86 CPU manufacturers are hurting right now, and I'm sharing an option for anyone at those companies who might be reading that can alleviate some of that pain. Like I've said in other comments, if I could get a 7950x without AMD ST (formerly PSP), I'd eagerly pay $2500 for it, and I definitely wouldn't be the only one.
Your social sphere is tiny, to be quite frank. The market we're talking about is tens of millions to billions of devices. You and your four friends, or your ten friends, or your thousand or even hundred thousand friends are largely not relevant. The vast majority of people using computers that have these processors in them would probably give you a very puzzled look if you even asked them what CSME or ST are.
This is not my entire social sphere, nor is my argument that my social sphere alone has influence, it's just an example (admittedly a small one that isn't representative of the general population, see other comments) of a larger trend. Please also read my other comments where I discuss Google wanting ME-free CPUs for their servers as part of an effort to remove all proprietary blobs that was started in 2017. There is enormous institutional demand that singlehandedly justifies this offering, even if you entirely disregard the very real consumer demand.
The vast majority of people cannot cover a $500 emergency expense and fall for extremely basic phishing attempts - what the "average" person wants is an objectively bad metric to strive for or to base reasonable expectations upon. The "average" person is a fool that struggles to cover the lowest levels of their hierarchy of needs, to say nothing of abstract concepts like digital privacy.
Would you accept what the average person has for your financial situation? Your education? Your level of personal responsibility? I sure as hell wouldn't. Why should we factor the ignorant opinions of the tech-illiterate regarding digital privacy as a baseline for what ought to be?
I know this comes across as harsh, but that's reality. Consumers these days by and large don't even understand what data is being collected about them, let alone how the data collected from them can be used against them: https://www.nytimes.com/2023/02/07/technology/online-privacy...
I understand that n=4 is a small sample, and it is not representative of the general population, but you're strawmanning my argument if you're suggesting my point is that the four of us alone affect the market, I'm suggesting a larger trend.
I don't know about anyone else, but I've been patiently waiting for the Ryzen 9 7950X3D since the 5800X3D came out. The gaming performance on that chip was so good that it was competitive with more expensive chips at the time, despite being slower for productivity workloads. My 4790K is starting to show its age when playing games like Rimworld and Elden Ring.
I'm waiting for benchmarks. So far I'm not convinced the 7950X3D will be better than the 7950X, especially since there is no way the scheduler will be able to tell whether a given thread benefits from more cache or from higher clocks, unless someone develops a very sophisticated one with AI-like training capabilities? I haven't seen any efforts of that sort (I'm gaming on Linux).
Pretty marginal benefits for many workloads. And it's not directly comparable, because 5800X3D provides more cache for all cores. 7950X3D provides more cache only for half the cores. That creates a weird hybrid CPU. Half the cores have higher clocks and less cache, and half the cores have lower clocks and more cache.
Your threads will be scheduled all over the place pretty randomly. So I'm not yet convinced it's going to be better than stock 7950X with higher clocks across the board. Actual benchmarks will tell.
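In the meantime, on Linux you can at least take the scheduler out of the equation by pinning a game to one CCD yourself. A rough sketch with Python's os.sched_setaffinity follows; the assumption that logical CPUs 0-15 are the V-Cache CCD is purely illustrative - check lscpu, the real mapping depends on the part and firmware:

    import os

    # Hypothetical layout for a 7950X3D: assume logical CPUs 0-15 sit on the
    # V-Cache CCD and 16-31 on the higher-clocked CCD. Verify before relying on it.
    CACHE_CCD = set(range(0, 16))

    # Pin this process (and anything it execs afterwards) to the cache CCD,
    # so threads can't bounce between the two very different CCD flavours.
    os.sched_setaffinity(0, CACHE_CCD)

    # ...then launch the cache-hungry game/benchmark from here, e.g.:
    # os.execvp("./some_game", ["./some_game"])

The same thing from a shell is just taskset -c 0-15 ./some_game, without the Python wrapper.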
Poorly coded? RimWorld handles hundreds of mods mostly gracefully. Combine it with rimpy GPU compiling textures and you shouldn't have fps issues unless your computer is a potato or you have tons of VRAM heavy mods.
PCs sales aren't tied to GPU sales, though. Gamers switch GPUs way more often than CPUs (which makes sense, since CPUs often aren't the bottleneck).
PCs have simply become fast enough over the past 10 years and there's been no "Windows Vista"-moment that forced users to migrate to new hardware en masse. A 5 year old system will simply feel pretty much the same to the average user as a brand spanking new one. There's only so many video editors and professional gamers/streamers who thirst after the latest hardware.
> Most of the downturn in shipments is blamed on excess inventory shipping in prior quarters impacting current sales.
This is in line with comments from Drew Prairie (AMD’s VP of communications): [1]
> We are shipping below consumption because there is too much inventory in the channel and that partners want to carry lower levels of inventory based on the demand they are seeing
i recently built my first pc and moved my daily driver from being a thinkpad to a custom desktop.
i was already sitting at a desk, so ergonomically it’s identical.
now i can compile blender in 20 seconds and fly around with eevee. i can compile linux with custom lsm modules.
dual ssd makes it easy to dual boot. reboot into windows and i can have a magical evening.
7950x, 4090, 990 pro. it would be great if these were cheaper, then more people could afford to use them. it's also ok that they are overpriced. c'est la vie.
to anyone spending a majority of their life on a computer and making money, the cost of your primary computer doesn’t matter unless it’s ludicrous.
the opportunity cost is far higher. what might you have learned had you tinkered with blender or a kernel when you were bored?
Lowball speculation lacking any semblance of supporting data aside, remember that this doesn't require a new product line or any new features, just removing an existing component that costs more to include, and going back to the owner-controlled CPU core initialization code that existed prior to the introduction of ME.
Not to mention that both team blue and team red could charge something of a "privacy premium" for not including these much-despised coprocessors. Consider one of the only remotely comparable semi-modern options: IBM's Power 9 processors. Midrange versions of these processors and their motherboards each start in the thousands of dollars. For a Ryzen 9 7950X with no AMD ST (PSP) or Microsoft Pluton, I'd eagerly jump at the opportunity to purchase one for $2500.
Back to your estimate - even for consumer demand, your estimate is easily dismissed by simply looking at the existence of companies like RaptorCS, Purism, and the like. In addition, there is enterprise purchasing demand in the hundreds of millions, easily. Google was attempting to remove all proprietary blobs from their servers a few years ago (2017, if memory serves correctly), and ME/PSP was the big barrier they couldn't overcome at the time.
Make no mistake, there isn't a single tech company that wouldn't leap on such coprocessor-free compute to protect sensitive corporate secrets.
Firstly, a response with no supporting data is befitting of a post with no supporting data, no?
> this doesn't require a new product line, or any new features, just removing an existing component that costs more to add in
That seems wrong. Even having a new SKU has some overhead in terms of any manual process (such as making product brochures). Plus, are they going to silently make this hypothetical ME-less CPU, or are they going to try to do some outreach to interested buyers? If not, what's the point? That costs money too. Oh, and they'll need a separate line for packaging these CPUs. And lastly, adding a post-process way to remove ME is potentially a security issue, because people in high-security environments (who likely pay more than people such as yourself who want ME-less) might be worried about an attacker slipping an ME-less CPU in and doing something nefarious.
Anyway, TL;DR: it's not enough to prove that there are people interested in a ME-less CPU. Heck, if it's $1, I'll buy one for the hell of it. It's a question of how much more they're willing to pay, and how much more it'll cost Intel to deliver such a CPU. And we have precious little info to go on, so your argument is hardly a slam-dunk.
Edit:
> there isn't a single tech company that wouldn't leap on such coprocessor-free compute to protect sensitive corporate secrets
Really? Corporations are worried about ME? First I've heard of it. These are the same companies that willingly use Azure and AWS, right? Can you provide some evidence?
Google will continue buying Intel CPUs with or without the ME. Their trying to remove it is irrelevant, Google also tried to create a social network and whatever Google Buzz was. They have a lot of pointless projects, in other words.
> Sincerely, - A larger chunk of your potential customers than you think
I wish this were true, but in reality it's pretty far from that. What do tens or hundreds of thousands of potential customers count for amongst a sea of hundreds of millions, or billions, of real customers? A rounding error.
A huge chunk of their market _wants_ these features, another huge chunk just doesn't care, and there's a small (and at times very passionately vocal) minority that cares and doesn't want them.
But this is all setting up for a response to:
> Until then, I will never buy a new x86 processor ever again.
Don't hold your breath; it's not likely you'll ever have the opportunity to spend your money on x86 under the given conditions.
I've addressed this argument further down in the comments. TL;DR: Google wants servers with no Intel (CS)ME. If you don't think other big tech corporations feel the same way, or that the collective purchasing power of a few of the world's big tech companies' server budgets is a significant amount of money, I don't know what will convince you.
There may only be tens or hundreds of thousands of customers, but tens of those customers are going to want hundreds of thousands of individual units. I have no doubt annual sales for modern x86 silicon without the "security" coprocessors would be in the millions of units for the first few years, at a minimum.
I think their processors do not include a management engine, so you are safe to buy one. The management engine included in the chipset can be switched off permanently.
In general usage, it does not matter as long as you use third-party controlled CAs, distro repositories and automatic updates, to say nothing of the microsoft, google, nvidia, valve, mozilla spyware that can do anything with your data anytime they (or US/EU government agencies) want.
>I think their processors do not include a management engine, so you are safe to buy one.
There are no new AMD or Intel processors that come without ST (formerly PSP) or CSME (formerly ME).
>The management engine included in the chipset can be switched off permanently.
This is factually incorrect. me_cleaner cannot neutralize or disable modern CSME, there is no way to verify the HAP bit does anything at all, nor that the included TCP/IP stack on the Minix OS cannot accept remote commands to disable the HAP bit, if set. To our current knowledge, only the onboard GbE controller is accessible to CSME's TCP/IP stack, but we're working with extremely limited information. These are closed-source, hardened opaque-boxes that are deliberately designed to be inauditable and tamper-proof. Adding firmware support for other ethernet controllers or wireless cards would conceivably be trivial.
>In general usage, it does not matter as long as you use third-party controlled CAs, distro repositories and automatic updates,
I compile from source. OS, drivers, browser - all of it. I don't care if you think this is "unrealistic for the average user", my objective is not to have the security model that the average user has.
>to say nothing of the microsoft, google, nvidia, valve, mozilla spyware that can do anything with your data anytime they (or US/EU government agencies) want.
I do not run Windows, I do not use chromium (or firefox) based browsers, I do not use a discrete GPU, I don't have anything remotely gaming related (like steam) installed.
What I do have is a constitutional right to privacy that does not end where my CPU begins, and an unshakeable resolve wherein I refuse to voluntarily cede that right to privacy just because so many others do.
>What I do have is a constitutional right to privacy that does not end where my CPU begins, and an unshakeable resolve wherein I refuse to voluntarily cede that right to privacy just because so many others do.
Anger might help, if channeled properly into lobbying your representatives in Congress. Making up imaginary constitutional rights to a DRM-free PC won't help at all, though. Intel and AMD have the right to shove their spyware into their silicon, just as Microsoft has the right to shove theirs into their OS. You have the right to decline to buy it. Your rights end there, given that nothing they are doing is actually illegal.
That last part could change, which is why I recommend lobbying. It should be completely illegal to use a Wintel PC for a vast number of things that people are currently using them for, from healthcare to government services to military applications. If we can convince Congress of the threat, they can pass legislation that will wreck the business model of anyone who doesn't give the user -- or at least the admin -- control over what information the PC sends out and what it can receive. They will change their tune in a hurry when that happens.
That sounds good, yet in practice even medical HIPAA privacy is bunk. Last week I went to a big hospital for a walk-in x-ray. They refused to take the pictures until I consented to their standard forms, which provide for my results to be given to unnamed research groups. The check-in person acted like I was the first person ever to try to strike out that part of the form/contract. They literally refused treatment, and the 'patient advocate' played her role of being pleasant and clueless.
I complained to the state's hospital licensing office, which rejected the complaint because it was not an unsafe care issue.
Not completely related to privacy, but I do know of one state-level initiative to enshrine into law a requirement that the state largely end the requirement of proprietary software usage to interact with the state's various digital interfaces - "prohibiting, with limited exceptions, state agencies from requiring use of proprietary software in interactions with the public" - HB 617-FN in New Hampshire.
It's not a one-and-done solution but it's a big step in the right direction for government, especially for digital privacy.
>Intel and AMD have the right to shove their spyware into their silicon
Correct.
>just as Microsoft has the right to shove theirs into their OS
Correct.
>You have the right to decline to buy it.
Ding ding ding!
>Your rights end there, given that nothing they are doing is actually illegal.
Correct. I am making a market demand with my money, not a legal order for these companies to stop producing untrustworthy hardware and snoopy software.
What hardware setup do you use that you feel secure on? In my own searching I haven't found any off the shelf SoCs that have meaningfully more secure architectures. You either have IME or a garbage ARM based SoC that doesn't have an SMMU and forces you to fully trust your wifi card not to scribble over kernel memory. Most vendors really just don't care about system security. Maybe you should look into running your computer off an Ultrascale FPGA :)
Currently, my main workstation uses a Power 9 processor made by IBM, which is definitely much more expensive and much slower than modern Intel & AMD processors, but comes with the privilege of having a completely open ISA, open hardware schematics, and 100% open source firmware & microcode for the CPU itself.
I am not a big fan of ARM as many ARM chips have a TrustZone core, which is in the same camp as (CS)ME and AMD ST (PSP).
Fun fact: AMD ST (PSP) is actually implemented using an ARM TrustZone core.
I have airgapped boxes, but in general, I don't mind connecting to sites that aren't inherently trustworthy as I do not have javascript enabled by default, I enable it on a script-by-script basis. Is it possible someone has an n-day in the HTML rendering engine of my browser? Sure, which is why all of my activity is compartmentalized and isolated by way of virtualization. I'd love to run Qubes on my Power 9 workstation but Xen and Power 9 don't play nicely at the moment :(
Right now, I use a Power 9 (PPC64 arch) processor made by IBM for my main workstation. It is 100% open source - every bit of the firmware, and hardware schematics too. I have a few old laptops with Intel and AMD processors that predate the age of ME and PSP, respectively, but they are not powerful enough for running multiple VMs, background services, a few dozen browser tabs, streaming and decoding video over SMB, etc like I do on my workstation.
No, but the concern with Intel and AMD security coprocessors isn't a hardware supply chain attack (like what NSA's TAO unit does with hardware interdiction), it's with a component that we can reasonably assume to be untrustworthy that is guaranteed to be present in every CPU as a "feature".
We cannot read the source of CSME or AMD ST firmware. We cannot inspect what it is doing.
I can read the source of all firmware for all parts of my CPU. I can inspect what it is doing.
This is what the benefit really boils down to - it's not a guarantee of security, but it is a guarantee you can actually audit what your silicon is doing.
A CPU a few years old is still good enough. If you have a limited budget and want the most bang for your buck, then stick with your old CPU and get a new GPU instead.
Depending on what you're doing - unless you need to game with slightly better FPS, there's nothing a gaming PC/laptop from a few years ago can't still completely crush.
I was thinking more about playing with new AI toys. TBH a 10 year old CPU and GPU are still fine for most games, latest AAA releases perhaps notwithstanding.
I had the same CPU from 2019 until a few weeks ago. Upgraded to a 5700X (electricity is expensive around here, so the lower-wattage parts are appealing); with more aggressive memory timings I got about a 20% improvement in video encoding (the main CPU-bound task I do regularly). After selling the 3700X, the net cost of the upgrade was around $100 - not a bad deal, even though I stuck with the stock cooler! This is after I also doubled the RAM from the original build to 32GB for ~$60 last October. I expect to get another 3-4 years out of this AM4/DDR4 rig before a big overhaul.
Decades ago I did similar things, but the cadence was much faster (annual sometimes); my conclusion, like that of many others, is that PCs are usable for much longer these days. A net positive I believe.
100% this. If my motherboard hadn't started to slowly fail (been running on the backup BIOS for the past year), I'd still use my 2013 Xeon system. It's still fast enough for daily use and as long as I'm not recompiling large codebases, it's still fine today.
I plan on using my new rig for at least 6 to 8 years as well. The times when CPU performance literally doubled every other year are simply over. For the average user, literally any mainstream CPU released in the past 5 to 8 years will actually do just fine (stuff like high-end gaming and video editing aside).
2700X/RTX 2080 - at the time I thought the 2080 was overpriced, but then things went mental and it looked like a solid buy.
I want to upgrade (held off while buying a house and moving/renovating), and now that I finally can... I realised the only game I play a lot isn't even capped by my current hardware, so there just is no incentive to upgrade.
A 10 year old CPU (i7-4790) is still decent and fully usable for everything except a few unoptimised AAA games and JetBrains IDEs. It is the performance equivalent of a late-2021 mobile i5-10300H.
The gap between different CPU lines is huge. Back in 2020 you could buy a new Celeron N3060 laptop with the performance of a Pentium dual-core from 2006. It was straight e-waste, but people actually got things done on it.
That era marked the lowest level of competition in CPU performance and it really showed in terms of how relatively lame an upgrade with the leader of the time (Intel) was. With AMD's products being competitive and ARM CPUs no longer being relegated to smartphone class performance there is real competition again. Given that and the historical tendency that software grows to use the available hardware I wouldn't bank on every CPU upgrade lasting as long as they did in that period.
But damn if it hasn't been hard to get a good deal on a GPU these last couple of years...
i like how people now use $1k for the price of a top-end gpu, when not too long ago it used to be half that.
if you are starting from zero today, then it seems very rough indeed. but for the vast majority, bloated applications aside, people are just fine with what they already got.
in terms of gamers, i am amazed that one group is excited by the steam deck and happy to live with its limitations, while another group wants the bleeding edge and demands 4080/4090 performance for much less. in a day and age where even cyberpunk can be played on integrated graphics, we still stress about having a graphics card.
for ai or other "production" workloads, unless you're starting from scratch, things remain as they had in the past: you pay more to save time. i personally chose to skip tensor cores for my machine to save on costs, but it should not be holding me back when i really need to train a large model. i am not sure how many people would actually be using their hardware for long enough to justify the "10 month" rule on buying v/s renting.
as far as the main topic is concerned, cpu has become way too good way too quickly. only a few years ago we just had minor bumps on 8 threads for the consumer space. fortunately bloat hasn't advanced as much.
And remember it's not like it actually needs any of it. There are a few checks of the form "if(doesntHaveTpm2) refuseToWork();" in the code, but people have hacked around those and everything still worked fine.
The best chip production is going to smartphones. PC processors drain my battery and are slow compared to a GPU. So why would I spend money on a new PC or new PC processor?
Strongly recommend looking at used Dell OptiPlex Micros. I have 2 5060Ms, 35W Core i3 8th gen, 2 RAM slots (8GB commonly included), NVMe slot (256GB commonly included), SATA slot, RTC, for about $150. Considering Raspberry Pi prices and the absurdly greater performance, it was a no-brainer for me.
Most people are buying rpis because they can throw some boot image on it made for the rpi and have things just work. A regular computer, no matter how much better in performance, doesn't just work.
For example: you can't just throw a Birdnet-Pi (https://birdnetpi.com/) image on a normal PC and run it. There is plain Birdnet, but it doesn't have any of the automation or the easy-to-use web interface that Birdnet-Pi does. Instead it's pages worth of complex multi-argument commands you'd have to customize, and you'd have to port the web interface yourself.
Since most people want rpis for little projects like this a normal PC massively increases the complexity.
They always are. Why do you need them? Their high prices are probably because of Broadcom though, which is also why they made their own chip. It may never be cheap again.
Is AMD completely out of the ARM chip game? I know they had interest a few years back but seemed to abandon it. I'd really like an option to buy an ARM CPU and motherboard from someone who will support it. Basically something in between an rpi and a MacBook: $400-500 with upgradable RAM, storage, and GPU.
At least for now AMD doesn't have anything ARM. There are ARM systems available but nothing anywhere close to that price range.
The Honeycomb LX2 is probably the closest thing to that currently, but it launched for $750 and has since gone up to around $920. Performance is not remotely competitive with x86 systems at that price point.
There are some systems based on the Ampere Altra chips, but nobody sells the motherboards/CPUs on their own and a full system will run you at least ~$6000.
I’ve been so far out of it for so long I have no real way to interpret those numbers. 13600 is almost double 7700 so that’s obviously good but is i5 noticeably inferior to i7?
Intel’s naming convention on consumer parts is relatively easy to grok, as it’s been much more consistent for the last 12 years than their server/workstation parts.
Breaking both down we have 3 things to take note of:
i5/i7 - indicates relative performance or feature set within a given generation, bigger is generally better
13/7 - the generation of processor
600/700 - where Intel rates a given processor within a generation, this is consistent and doesn’t (to my knowledge) involve overlap between i3/5/7/9 - generally bigger is better.
So the i5-13600 is a thirteenth gen i5, type 600, and the i7-7700 is a seventh gen i7, type 700.
Then you get the legion of letter suffixes determining other features, mobile SKUs etc.
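If it helps, here's a toy decoder for that scheme in Python (a deliberate simplification that ignores the many exceptions in Intel's naming, so treat it as illustration only):

    import re

    # Very rough decoder for consumer Core names like "i7-7700" or "i5-13600K".
    # Assumes 1-2 leading generation digits followed by a 3-digit SKU, which covers
    # both the older 4-digit and newer 5-digit model numbers.
    PATTERN = re.compile(r"i(?P<tier>[3579])-(?P<digits>\d{4,5})(?P<suffix>[A-Z]*)")

    def decode(name: str) -> dict:
        m = PATTERN.fullmatch(name)
        if not m:
            raise ValueError(f"doesn't look like a consumer Core model: {name}")
        digits = m["digits"]
        return {
            "tier": "i" + m["tier"],          # segment within a generation
            "generation": int(digits[:-3]),   # leading 1-2 digits
            "sku": digits[-3:],               # positioning within the generation
            "suffix": m["suffix"] or None,    # K, F, T, H, U... feature/mobile letters
        }

    print(decode("i7-7700"))    # {'tier': 'i7', 'generation': 7, 'sku': '700', 'suffix': None}
    print(decode("i5-13600K"))  # {'tier': 'i5', 'generation': 13, 'sku': '600', 'suffix': 'K'}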
Also, new "gens" have been announced/released roughly every year since 2010, so the 7700 was released in 2016 and the 13600 at the end of last year.
Performance gains have come unevenly: sometimes there are only marginal gains, sometimes (when AMD bites) much more. For example, an i7 from 2017 is the equivalent of an i3 from 2020.
Recently, and especially with this gen, the line between i5 and i7 comes down to the number of cores rather than single-core performance. Basically 14 cores, 20 threads @ 3.5GHz vs 16 cores, 24 threads @ 3.4GHz.
What used to be the i7's position (like with the i7-7700) is now occupied by the i9 line.
I like to use Passmark as a very rough comparison for CPUs. Emphasis on rough, but probably grounded in reality. Whether the user can utilize such performance, or if they have a specific workload that isn't ideally multithreaded, is critical.
7700 is 7th generation, 13600 is 13th generation, so they are about 6 years apart. The 600 and 700 tells you something about how they are positioned within their generation (bigger equals better). i5 vs i7 is a difficult topic. i7 generally has more hardware features enabled and has the higher end models. But unless you want specific features it's not so big of a deal.
Looking up the numbers instead of reading them like tarot cards, I can tell you that the i7-7700 is a 4-core, 8-thread processor with a max turbo frequency of 4.20 GHz, and the i5-13600 is a 14-core part with 6 cores that can do 5 GHz and 8 efficiency cores capped at 3.7GHz. And both support VT-d, which is about the only feature I would care about :D
i7 and i5 are just "market segment" identifiers. Even in the first few generations, there were "i7" cpus that were inferior to "i5" ones.
In general, you can infer very little from the marketing names. To have any actual idea, you have to look at benchmarks. In practice, the 13600 is ~70% faster per thread, and has 2.5 times the threads.
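Back-of-the-envelope on what those two figures combine to (a sketch that ignores memory bandwidth, the P/E-core split, and everything else that keeps it from being a clean multiplication):

    per_thread_gain = 1.70    # ~70% faster per thread (figure from above)
    thread_ratio = 20 / 8     # 20 threads on the 13600 vs 8 on the 7700

    print(f"single-threaded work:         ~{per_thread_gain:.1f}x")
    print(f"embarrassingly parallel work: ~{per_thread_gain * thread_ratio:.2f}x upper bound")
    # ~1.7x and ~4.25x respectively; real workloads land somewhere in between.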
I just did the same upgrade myself, though from a 7700K. Really I can't say it's been that much of an improvement day to day, and I expect to wait even longer before my next upgrade.