First, this already happened to some extent. Outside of drives and fans, virtually all of the power used in your system is 12V: either from the 12V pins on your ATX24, P4, or EPS12 connectors, or via the PCI-E connectors. Very little non-drive, non-fan power is supplied via the 3.3V or 5V rails.
Second, almost all PSUs today use 12V-to-3.3V/5V DC-DC converters to increase efficiency, given how little 3.3V and 5V is required. The PSU itself is often designed internally as a high-efficiency, multi-module, single-rail, 12V-only design.
Third, Intel is not in a position to dictate anything. Intel has floated many changes to the standards, such as the WTX board standard (with a matching, incompatible mobo plug) and the BTX board standard (using ATX plugs), and all of them failed. The only thing that stuck was using 4- and 8-pin Molex Mini-Fit Jr plugs (the same family ATX, PCI-E, and countless other plugs come from), which are, annoyingly, somehow not compatible with PCI-E's pinout.
Fourth, Intel is also not the dominant player in the field now; AMD is. Intel failed to get major changes through as the dominant player, so what makes them think anyone cares to hear what they have to say today? It is unlikely AMD machines will ever ship without normal ATX unless the ATX24 is jettisoned altogether.
Side note on 12V capacity: ATX24 144W, P4 192W, EPS12 336W, 6-pin PCI-E 75W, 8-pin PCI-E 150W, and the PCI-E slot itself 75W. (Rough derivation sketched below.)
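For the curious, those figures fall straight out of pin math. A minimal sketch in Python; the per-pin ampere ratings are my assumptions for typical Mini-Fit Jr terminals (reverse-engineered to match the figures above), not quotes from the specs:

    # 12V capacity per connector: (number of 12V pins) x (amps per pin) x 12V.
    # Per-pin ratings are assumed typical Mini-Fit Jr values, not official spec.
    connectors = {
        "ATX24": (2, 6.0),   # two 12V pins at ~6A each
        "P4":    (2, 8.0),   # two 12V pins at ~8A each
        "EPS12": (4, 7.0),   # four 12V pins at ~7A each
    }
    for name, (pins, amps) in connectors.items():
        print(f"{name}: {pins * amps * 12:.0f}W")   # 144W, 192W, 336W
    # The PCI-E 6- and 8-pin figures (75W/150W) are spec limits rather than
    # raw pin capacity, so they don't follow from this arithmetic.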
A modern desktop could theoretically be run off dual EPS12 plus 8+8-pin PCI-E for the GPU: modern mobo designs run the CPU and RAM VRMs entirely off the EPS12, modern GPU designs run the GPU VRMs off the PCI-E plugs and the VRAM VRMs off PCI-E slot power, and very little power at any voltage is supplied to the system via the ATX24.
Another side note: Any board that has a legitimately clean 5v rail for USB is not feeding it off the PSU directly. Any board that has absolutely garbage USB power probably committed this sin.
I agree it's delusional, but to give some hard data, Mercury Research published its quarterly findings on x86 market share as of Q4 2019[0]. Specifically:
- On the server side, AMD market share is only 4.5% (up by 1.4 percentage points from last year).
- For desktop, market share is 18.3%, up by 2.4 pp from last year.
- For laptop, market share is 16.2%, up by 4.0 pp from last year.
- For overall x86 chips, market share is 15.5%, up by 3.2 pp from last year.
So it's pretty clear that AMD isn't quite leading the market, although there is an overall upward trend, and the rate at which AMD is gaining share seems to be on the rise. Maybe in 4-5 years, AMD will be the dominant player. But it's still too early to call them this.
> Maybe in 4-5 years, AMD will be the dominant player. But it's still too early to call them this.
Highly unlikely. This is AMD's prime time because Intel screwed up, and they've peaked at 15%. I predict Intel will regain its pace in a few years and dominate the market once again.
Perhaps, but I'm not so sure; we may see things play out in AMD's favor, since they are fabless while Intel remains shackled to ever more costly and difficult fabrication. Either way, they did hire Jim Keller, and I am excited to see what he can do with their resources and technology.
I disagree. Intel’s capacity needs are huge, to the point where the benefits of outsourcing just can’t happen. Also, it’s not a good idea to rely on others, because they can screw up. Remember 28nm and the graphics fiasco?
Sure, the costs will be high and there will be some risks with in-house fabs, but it's Intel. They're better off with it.
TSMC has a newer process, N7+, which uses EUV and has a higher density than Intel's 10nm process[0]; it has been in production since last year. They are also on track to have their N5 process in production next year, which is supposed to have an even higher density[1]. Obviously, N5 isn't in prod yet, so this is just a roadmap; maybe they'll hit roadblocks like Intel did here.
But still, it's disingenuous to claim that Intel is trying something that wasn't achieved by TSMC already. TSMC has already beaten Intel on transistor density fair and square.
And you can't compare TSMC and Intel directly, because their target chip sizes are different. Intel produces chips averaging around 180mm², while TSMC and others aim for under 80mm². Even the industry leader, producing small chips, is struggling with yield in the 113 MTr/mm² range. Intel producing huge chips at 100.76 MTr/mm² density, even without EUV, means they are going about it the right way.
People with insight into this industry don't think Intel has ever been left behind in this race toward density. Fanboys and regular people influenced by the media do.
Sure, market share changes slowly as people keep their PCs for more than 5 years (remember that Ryzen was released 3 years ago and only Ryzen 3000 really beats Intel).
I think that’s a German hobby computer parts retailer. Won’t data centre sales dwarf that? I’m not sure you can use that as really representative of the market.
> Intel is also not the dominant player in the field now; AMD is.
Only dominant in enthusiast land. ~20% of desktop, but their server market share is still in the 5-10% range, with many analysts estimating below 5%. (Numbers from rumors of Mercury's next report, to be released soon.) Likely to blow past Intel after the next hardware cycle; I'd guess 2021-22, since Epyc Zen 2 launched at the end of 2019. Big players plan hardware on a multi-year schedule (OEM, VPS/hosting).
As someone who reads a lot of PC specifications in detail, I can assure you, Intel is in an excellent position to dictate PC specifications as it routinely does so both independently and as a very active member of boards that do so.
Yeah, this is just rearranging some things to avoid California's new efficiency regulations. If you move the inefficient stuff to the motherboard, the PSU magically gets more efficient!
I don't see how regulating power draw from the wall could work for anything other than idle, because PCs vary widely in power requirements and performance. Regulating efficiency makes more sense because it doesn't automatically give low-end machines a pass or ban high-end machines.
Fujitsu have been building PCs for a number of years with proprietary 12v-only PSUs, with SATA power being provided from an auxiliary connector on the motherboard.
Basically every OEM small PC (everything below SFF from Lenovo, HP, Dell, Fujitsu, etc. which do not accept PCIe extension cards and are pretty much built from laptop parts) is now powered from laptop brick PSUs. Which means everything inside the desktop is derived off of a single 19-20V DC line. SFFs still use internal PSU but they're all custom and while I haven't opened up one recently I'm sure they dropped as many of the rails as possible since they have full control of the MoBo/PSU design. The only PCs that use ATX PSUs are full towers in general.
Modularized redundant server PSUs are usually built this way: the modules are 12V-only and dumb as bricks, with the 3.3V and 5V supplied by the PSU's backplane; unfortunately, this means a backplane failure takes the server down with it.
Some SAS backplanes for cases that fit a bunch of drives have their own 5V VRMs and take only a standard 12V connector of some sort. That helps, since PSUs typically don't supply enough 5V to feed a few dozen drives, but supply more than enough 12V.
Based on my experience selling USB 3.0 PCIe cards, the ones that supply 5.0V directly off the PSU are more likely to maintain voltage at higher currents than ones with an on-board buck circuit. And it's one less part that can fail. I have not seen much in the way of stability issues either. The 5V from the PSU is clean enough and voltage drop is minimal at these currents over the standard 18 AWG PSU wires.
Generally the ripple is audible on USB-powered audio devices that don't use isolated 5v rails.
The ones that don't have audible issues either have their own VRMs on the boards or they have a sufficiently isolated 5v rail but still use PSU power. A good board should not have issues even under extreme cases. Neither solution is particularly more expensive than the other.
If I need to maintain voltage without droop under high load, I probably should be looking at legitimate chargers, not something powered off the computer.
> Outside of drives and fans, virtually all of the power used in your system is 12V: either from the 12V pins on your ATX24, P4, or EPS12 connectors, or via the PCI-E connectors. Very little non-drive, non-fan power is supplied via the 3.3V or 5V rails.
I'm curious which fans don't use 12v - all the fans I know of in a typical computer operate on 12v except perhaps GPU fans. Occasionally I'll run a fan on 5v if I need it to be quieter and I don't have any PWM headers available, but that's about it.
Some systems control fans via voltage (the old way of doing it), instead of modern PWM (which has been the standard for about a decade); and even with PWM, it should be viewed as an average of the voltage instead of merely as the peak voltage during pulses.
No matter how you end up controlling your fans, you're not giving them straight, unfettered 12V: either you're controlling them via voltage (and they spend most of their time at around 5-7V), or you're controlling them via PWM (with a significantly reduced duty cycle; quick sketch below).
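To put rough numbers on the PWM case, a minimal sketch of the duty-cycle-weighted average a fan effectively sees, assuming an ideal switch on a 12V supply:

    # Under PWM the fan is connected to 12V only for a fraction of each cycle,
    # so the effective drive is roughly duty cycle x supply voltage.
    SUPPLY_V = 12.0
    for duty in (0.3, 0.5, 0.8, 1.0):
        print(f"{duty:.0%} duty -> ~{duty * SUPPLY_V:.1f}V average")
    # 30% -> 3.6V, 50% -> 6.0V, 80% -> 9.6V, 100% -> 12.0V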
Some ultralight laptops have switched to 5V for their fans, but that does not seem to be any sort of standard. I have not seen a GPU that uses 5V for fans, and by the time GPUs needed fans big enough to require speed control, they were exclusively PWM; I am not aware of a GPU that used voltage control on its fan.
I love computers with 12VDC native power jacks. Why? My solar-powered van (interior only, the engine is still diesel) is 12VDC everywhere, and that's good clean battery DC, not some noisy-ass AC/DC conversion.
I love it so much that 2 of the last 3 computers I built (on an ATX form factor) were 12VDC native. I have external AC/DC power supplies for them, but when travelling/living in a solar-powered environment, I get to just plug them straight into the clean juice.
Sadly, monitors used to be this way too - they came with external power bricks and were 12VDC native. But sometime in the mid-2000s, something changed. My guess is that someone figured out how to shrink the necessary converters to a size that made it more "sensible" to put them inside the monitor. Result: almost impossible now to find a monitor you can connect to a 12VDC supply. The one I have in the van took me months to find on ebay back in 2015. The only exceptions are "TV" monitors specifically made for RVs, which is great and all, except that most of them are crap compared to a modern monitor.
One small problem with plugging 12VDC native computers into a solar system: it's not atypical for the solar charge controller to drive the charge voltage over 14V, at which point most internal "PSUs" designed for 12V will shutdown. You need a voltage regulator between you and the battery-terminal voltage.
Do you find that it's more power efficient to run a custom built machine like that? I would think that using a laptop as your daily driver would stretch your battery bank further, since it's designed from the ground up for battery power. I lived off solar power for a while, and that was my thinking. I never did the math to verify it though.
What I will say is that the machine I used to use in the van lost its video output (the perils of onboard graphics), and I've found it impossible to source a reasonable replacement mobo. As a result, I've switched back to using a laptop (lenovo Y700) in the van.
What I notice as a potential improvement is that it effectively lets me store an extra X hours of power for computing, because I'm adding the laptop battery to the overall storage. Compared to my "house batteries", the laptop batteries are of course very small. But getting (say) 3 hours of work time stored up in the laptop does seem like a plus, especially on cloudier days.
The trends definitely favor a mobile-first approach going forward, if only because of the additional systems integration work that goes into tuning laptops. Battery life aside, the laws of physics dictate that you're more likely to have a reliable experience with a low-power, low-heat, low-vibration machine than a screamer desktop, and those things are definitely qualities that the manufacturers optimize for in mobile devices.
It's also increasingly common to see single-board SFF desktops that are essentially laptop parts with different I/O. I suspect that those are less heavily tuned, though.
Problem for me: I need a screamer desktop. I'm not just online or running JS in the browser; I'm developing and building software that used to take 9 minutes to compile from scratch on those 12VDC systems. Over and over again.
At home now I have a 16-core Ryzen, which is extremely quiet and very, very much faster than those systems. But I don't think it would be possible to host this on a 12VDC PSU, and even if it were, the mobo form factor is inappropriate for the space I built in the van :(
To be fair, the Lenovo Y700 was/is essentially as fast as the 12VDC systems, which is quite impressive.
> My guess is that someone figured out how to shrink the necessary converters to a size that made it more "sensible" to put them inside the monitor
I’m under the impression that this was demand/market forces. It used to be much cheaper to have an external brick rather than engineer a power supply into a monitor. However, the engineering cost was lower than the missed sales due to consumers not wanting external power supplies. It’s just another thing to lose. Another unique plug you’ll never be able to reasonably replace with what’s on hand.
"noisy" in this context referred to the output signal, not the acoustic levels.
Also, the 12VDC native monitor that I did buy didn't have a unique plug. It used the fairly standard ring/core 12VDC jack and plug found in many different contexts, though not a lot of digital ones.
12V / 6mV is a voltage ratio of 2000:1, or about 66 dB of SNR. What's more, the 6 mV ripple figure on this supply's 12V rail is p-p and not RMS, so for most apples-to-apples measurements the effective SNR is higher still (roughly 9 dB more for a sine-like ripple).
Regardless, Seasonic supplies, and this one in particular, are very low noise. Most supplies are closer to 30mV ripple on the 12V rail.
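As a sanity check on those figures, the ratio-to-decibel arithmetic, using the 20·log10 convention for voltage ratios (a sketch, not a measurement):

    import math

    def voltage_snr_db(signal_v, ripple_v):
        # 20*log10 for voltage ratios; 10*log10 would be for power ratios.
        return 20 * math.log10(signal_v / ripple_v)

    print(voltage_snr_db(12.0, 0.006))   # 6 mV p-p ripple: ~66 dB
    print(voltage_snr_db(12.0, 0.030))   # a more typical 30 mV: ~52 dB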
I'm just waiting for electric vehicles and their technology to invade the van and RV market.
It would be really cool to have a solar/battery powered vehicle you can power from DC. A bus-sized RV could easily generate 5kW from current solar panels on its roof alone.
Current RVs do have both 110v AC and 12v DC systems, but they're not easily integrated.
12v is too low to power bigger stuff like air conditioners (1500 watts = 125 amp wiring!) and converting DC to AC requires expensive inverters.
A really cool solution would be high-voltage DC powering DC appliances directly and stepping down to clean lower voltage DC close to where it's needed.
Electric vehicles are a long way from the sort of mileage that would let them effectively replace vans and RVs. We routinely drive 500-800 miles in a day in our van. This isn't going to be possible until battery replacement takes over from battery charging.
And yes, our van has 120VAC (1kW) available, but I prefer to use it only for things where there's no alternative (e.g. our toaster, or recharging laptops, sigh).
As far as rooftop solar goes, we have the long Sprinter with about as much solar as you can get on the roof, given that you need (at least) an exhaust fan. That gets us to 540W. The most I've ever seen on a van was 3kW, and that was an ... insane ... physical setup.
I bought a tesla 700 miles away and drove it home the same day.
As to RVs and vans:
- the Cybertruck ships in the next year with 500 miles of range, and will charge at 250kW, around 1,000 miles of range per hour
- the Tesla Semi is coming soon
- other companies are entering the game
- people have been putting lithium ion batteries in their RVs.
- EVs have electric air conditioners now.
Separately, I will say that if you're driving an RV 500-800 miles in a day (regardless of power source), that's a really hard trip. I find 300 miles in the saddle taxing and 150-200 lots nicer. Leave, drive, and arrive during daylight hours and enjoy the trip.
I wouldn't want to drive 500-800 miles a day every day, and I don't. But when we do (e.g. just trying to get somewhere before staying there for a while), I don't want to be cramped in that way.
The cybertruck charging rate sounds great, but where are the charging stations? You know, the ones in southeastern Oregon or southwestern New Mexico?
I'm not trying to be a debbie-downer about this. I absolutely believe in an electrical vehicle future. But I don't believe it will happen via charging - it will happen via battery swapping, probably using a standard mandated by national (or international) level bodies.
You should be able to use a DC-DC converter for charging the laptop off the DC - they're sold as "laptop car charger" or similar and have a cigarette-lighter-style plug.
No, it's got local delivery and workman written all over it. Unless you're going to go RV at the local supercenter.
RV parks and campgrounds will limit your pull from the mains, and when someone starts pulling serious juice from their campsites, they'll start auditing.
I really don't think people take exactly that into account.
Right now, and even for the past few decades, electrical use has tended to be offered either as a freebie that comes with being a guest or renter, or priced in with a discount for bulk use. As more and more power-intensive appliances become common (i.e., the shift to EVs), the pricing model is likely to have to change.
Right now you can host hundreds of people at a campground, even with an AC on, say, every third or every other camper. Throw in 100 electric vehicles having to charge at the same time, and you may find the bill has to be passed on to users just to be viable.
I was recently wondering if 12VDC shouldn't become the new standard inside homes. Everything is still 230VAC (in the EU at least; I think the US uses 110V), which is way too much for a lot of household appliances. The only reason modern LED bulbs are hot and expensive is that they need to convert down to 12V from 230V. It's mostly washing machines and kitchen appliances that need that much (and some of those even need 380V).
Almost everything in the home needs adapters and power bricks to convert it to something sensible, which usually is 12VDC. So why not just make that the standard?
Monitors were still 12V inside last time I checked (5 years ago), they're basically integrating the PSU into the monitors because they use less power and can be smaller/run cooler.
It's very likely you can just bypass the internal PSU on any monitor and power it straight from your battery.
You'll just need to take it apart, which can be hard depending on the housing.
> I built (on an ATX form factor) were 12VDC native
I want this! Mainly to run from batteries efficiently; transforming from DC to AC and then back to DC is so absurd. Can you give any photos/links/recommendations on how to start?
Sadly, I mistyped the above. I used ITX mobos, not ATX. However, you can get started by taking a look here: http://mini-box.com/ and in particular the "picoPSU" section.
My van has 455Ah of storage in lead-acid AGM batteries, charged by 540W of solar panels and a solar charge controller. The batteries power a 12VDC service panel; one of the circuits runs to the computer/office area, where I have a voltage regulator to keep the voltage within limits when the solar charger decides to ramp things up too high for the picoPSU.
In one sense this feels like sweeping the efficiency problem under the rug. There's a requirement for power supplies to meet a certain efficiency standard, and generating the 3.3V and 5V rails pushes that number down. Solution? Move that task to the motherboard, where it won't count against the power supply efficiency requirement.
Of course, from a purely engineering perspective, it really does make sense and it's overdue.
The motherboard "knows more" than the PSU though, it should be able to better deal with these idle power situations by shutting down things that aren't used anymore and powering them back up when needed. An external PSU would have tighter constraints regarding power availability and wouldn't be able to optimize the power profile quite as aggressively since they don't really know exactly what they're driving.
This is an interesting thought and a different perspective on the situation.
I was thinking something of that kind: instead of relying on very high-quality components and very high power-supply design expertise being centralized in the PSU, everyone else (motherboard / graphics card / hard drive manufacturers) must now become able to design, and willing to implement, high-efficiency, high-quality power-regulation circuitry in their own products.
This also affects the price of those components, while it probably does not make the PSU I get to buy in a webshop any cheaper.
From an engineering viewpoint, it probably does make sense, especially for the lower voltages, as less copper may be required.
Exactly. I'm pretty sure that once the DC-DC converters are mainly placed in the motherboard, they won't be audited for efficiency anymore and global efficiency will plummet.
Decreasing the efficiency of the motherboard would mean it needs more power, meaning it would require a larger power supply, in addition to being a new specification.
Makes sense. Should have happened long ago. All those old voltages date from when +5 from the power supply was used directly by the logic. Facebook's Open Compute rack delivers only +12 to the boards and drives, and that's from 2011. There's a power supply in the base of the rack that takes in three-phase AC or 48 VDC or whatever and puts out +12VDC. All other conversion is on the board. This is just catching up desktops to where the data center went years ago.
Ah, they did.[1] The power busbars are in the same place, so you can't have both 12V and 48V in the same rack. The connectors are intentionally incompatible so you can't plug into the wrong voltage.
> PSU vendors don’t want to release ATX12VO products for DIY builders until there are motherboards that support ATX12VO. Motherboard vendors don’t want to create products until power supply makers support them.
Seems like that could be addressed with one of two adapters:
- A passive adapter cable with a female ATX connector (with the 3.3V and 5V pins disconnected) and male ATX12VO connector
- An active adapter cable/board with a female ATX12VO connector, the necessary circuits to step voltages down to 3.3V and 5V, and a male ATX connector.
One or both of these could happen today and immediately solve that chicken-and-egg problem for the custom market, no?
> An active adapter cable/board with a female ATX12VO connector, the necessary circuits to step voltages down to 3.3V and 5V, and a male ATX connector.
I think this will definitely happen, as supplies of classic ATX PSUs start to dry up, and people need to keep powering their old machines. This situation already exists for old AT style vintage machines, being powered by ATX PSUs.
It was bound to happen eventually; there has been a long trend toward point-of-use regulation displacing multiple-output supplies. I'm just surprised they went with 12V and not 19V, which has lower rectification losses, less power lost to wire/trace resistance, and a large existing design ecosystem from laptops, where 19V has been the norm for years.
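On the wire-loss point: for the same delivered power, current scales as 1/V and resistive loss as I²R, so moving from 12V to 19V cuts the loss to (12/19)², about 40%. A quick sketch with an assumed cable resistance:

    # Same power over the same cable: I = P / V, loss = I^2 * R.
    P, R = 300.0, 0.04   # 300W load, ~40 milliohm round-trip cable (assumed)
    for v in (12.0, 19.0):
        i = P / v
        print(f"{v:.0f}V: {i:.1f}A, cable loss ~{i**2 * R:.1f}W")
    # 12V: 25.0A, ~25.0W; 19V: 15.8A, ~10.0W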
To me it seems like the ATX12VO specification is more of an endorsement of the direction PC power supplies seem to be moving in rather than a from-scratch redesign. It would be nice to have higher voltage (19V, 24V or perhaps higher) for a single voltage supply, but it would be a far bigger change to the overall PC ecosystem.
Why is it 19V for laptops? I had suspected it's to compensate for voltage drop at linear regulators. Batteries are either 11.1V (three 3.7V cells) or 14.8V (four 3.7V cells, with 1-3 in parallel), and those voltages don't seem immediately consistent with ATX12V to me.
The nominal voltage of a battery isn't the maximum voltage, 3.7V lithium cells are typically charged to around 4.2V.
A 4S battery (or equivalent) will have a nominal voltage of 14.8V and a maximum voltage of 16.8V, which leaves 2.2V extra, probably to account for losses.
Why 19V ended up being the typical voltage rather than something rounder like 20V or 24V is beyond me (probably a matter of compliance; try checking IEC 60950-1 or related standards).
I have seen many laptops where the charger sticker says the output voltage is 19.5V. And the nearest USB-C standard voltage is 20V, so that's probably what most newer laptops which charge through a USB-C socket use.
I don't think we'd want to put those in our laps if they were dropping that much over linear regulators... Roughly: (dropped voltage) × (current through regulator) = (power dissipated).
12V makes so much more sense to be sending around - the losses at lower voltages are rough. It would be nice to replace legacy molex connectors with a more modern standard, however.
The SATA power connector is the obvious successor. Most random doodads (e.g. internal fan controllers) have moved from using molex connectors in the past to SATA power connectors now.
The SATA power connector isn't that great at high currents, look at the numerous fires. Fan controllers have moved to that because most modern PSUs have a lot of them laying around, that's all. And the board-side connector is literally a PCB edge with gold fingers, so it's cheaper.
The Mini-Fit Jr connectors seem fit for purpose, but I don't like how stiffly they stick even after releasing the latch. And they're sort of tall, depending on what you're doing.
I'm a fan of Micro-Fit 3.0, and the single-row variants are available in enough positions that you could have a low-profile connector delivering plenty of reliable power. I think the dual-row would be more appropriate for most applications though. They're cheap enough, though the tooling isn't.
Huh. That's pretty cool. I've only seen (and used) the Molex to whatever-that-connector's-called adapters, which is indeed what I used in my Threadripper build about a year ago.
Idk if it's MOLEX or whoever is knocking off their designs but the plastics they use are very brittle. I've had the clip on a number of modular cables snap off while inserting it into the motherboard and GPU connectors.
I also think the design is large and dated, and I think the modular nature of the connectors is unnecessary. But I don't know how small a wire gauge you can safely use for the current delivery on some of these 500-1k W PSUs.
And some recent examples are a terrible fit; I remember them aligning much more smoothly. A clear sign that manufacturers consider them second class. And nobody will complain, because it just strengthens the pre-existing assumption that the new plugs are better.
Industrial design demands this. A replacement connector should be difficult to remove and use sufficiently thick cables. Because of the requirements there is little room for improvement and thus we have used the same connectors for 30 years.
The molex 2.36mm pins can apparently carry 8.5A, which at 12V gives you about 100W. That’s useful for some power hungry components, but I wish there were a smaller alternate 4-pin power cable standard that I could use to route power around to things like fans and SATA SSDs.
Something in between the giant molex connectors and a 3-pin fan connector.
ATX power uses 18 AWG copper cable, which has 6.4 ohms of resistance per thousand feet. A 6-foot 18 AWG cable would have 0.0384 ohms of resistance. Let's say you have a pretty heavy load of 10A on the 3.3V rail; that's a 0.33 ohm load. There would be about 10% loss, or 3W.
Not nothing, though at more typical loads the cable loss would be lower. 5V rails are more typically used, and at the same power they would have close to half the loss.
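Spelling the estimate out (a sketch; the 3.3V rail is my reading of the implied "0.33 ohm load", and the wire figure is the standard 18 AWG resistance):

    # I^2 * R loss over a 6 ft run of 18 AWG (~6.4 ohm per 1000 ft).
    r_wire = 6 * 6.4 / 1000             # ~0.0384 ohm
    v_rail, r_load = 3.3, 0.33          # ~10A nominal load on the 3.3V rail (assumed)
    i = v_rail / (r_load + r_wire)      # ~8.96A actual current
    loss = i**2 * r_wire                # ~3.1W
    total = v_rail * i
    print(f"I = {i:.2f}A, wire loss = {loss:.2f}W ({loss/total:.0%} of {total:.1f}W)")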
I wonder how easy this would make server local UPS systems. A battery with a single buck/boost converter should be enough as a power supply. Just need to add a mains fed charge circuit.
I actually leave my monitor off UPS intentionally. While I'm away, I'd rather my workstation survive a brownout or short loss of power and if I'm actively working, it's easy enough to move the monitor's power cable to the UPS to save my work and shut down. My CPU draws ~45W at idle; my monitor draws a whopping 130W — having it on battery materially shortens the off-AC runtime.
Older high-end monitors with CCFL backlights could easily draw 130W. Dell's 30-inch model from 2008 was rated for typical draw of 163W and a max of 250W (when also powering speakers and USB devices).
Being able to power an external device (like a hard drive), or to cleanly exit from whatever you're working on if you happen to be present (with a monitor), both serve these goals.
The average PC or workstation isn't configured for a clean exit just because the UPS said it's time.
Bridging momentary outages is also useful where that's common, disregarding data loss.
It's an interesting development. Things like -12V made sense in the 80s (sigh), while high-powered 3.3V/5V needs made sense in the 90s (and still make some sense today, for example for USB).
Though one thing they might have considered but maybe rejected was going for a higher-voltage single-rail power supply (something like 24V).
Your board might still need those voltages but would have to DC-DC convert them itself (as opposed to a multi-tap transformer, as the article was implying).
A lot of peripheral ICs don't have reset lines. Sometimes this is a big, big problem. You can end up with failure to come up maybe 1 in 10,000 power-ups. Exactly the sort of thing that shows up late in development, or worse, in the field.
Tip: If anyone tells you power supply and reset sequencing is 'easy' ignore them.
Holding chips in reset is not the only thing that matters. Complex chips may have power-sequencing restrictions; for example 1.8V power rail might need to be up before 3.3V, otherwise the chip gets damaged.
This makes sense for specific use case computers, specialty ones, etc. So it'll be adopted. And because of economies of scale it'll push into actual desktop computers and their mobos. Having to dissipate all that heat on/in the mobo will be terrible for actual desktop computers. But it'll happen anyway because it's better for corporations and the desktop market doesn't drive their decisions.
A typical CPU (or SSD even) uses a voltage of 0.9V currently; surely it doesn't matter whether the motherboard is converting 12V to 0.9V or 5V to 0.9V.
Back when this stuff was originally designed, 12V, 5V, and 3.3V were all you needed (actually, with the AT standard it was just 12V for motors and 5V for logic; that was pre-CMOS). Now every component requires a different voltage (if there's any standard it might be 1.8V or 1.2V), and getting several hundred watts of power at 0.9V would require cables about an inch thick.
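To see where "an inch thick" comes from, a rough sketch of the current and conductor size needed; the ~3 A/mm² current-density rule of thumb is my assumption, and 300W stands in for "several hundred watts":

    # I = P / V, conductor sized at ~3 A/mm^2 (a common rule-of-thumb density).
    import math
    P, density = 300.0, 3.0
    for v in (12.0, 0.9):
        i = P / v
        area = i / density                      # required copper cross-section, mm^2
        dia = 2 * math.sqrt(area / math.pi)     # equivalent solid-core diameter
        print(f"{v}V: {i:.0f}A -> ~{area:.0f} mm^2 (~{dia:.1f} mm diameter)")
    # 12V: 25A -> ~8 mm^2 (~3.3 mm); 0.9V: 333A -> ~111 mm^2 (~11.9 mm of solid
    # copper per conductor, before insulation and the return path).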
12V is the defacto standard for powering pretty much anything these days. Converting 12V into whatever you want is utterly trivial and incredibly efficient.
SSDs all have their own power management chips to manage their onboard voltage regulators. The flash memory itself is usually operating with 1.2V and/or 1.8V supply, and the SSD controllers are also in the neighborhood of 1V. Approximately nothing inside a modern PC uses 3.3V or 5V directly. Feeding 12V directly to the SSD won't make its power management circuits any more complicated or expensive, but it does get rid of the brown-out problem M.2 (3.3V) has with high-power drives.
No, I mean if you look at 2.5" SSDs with datasheets specifying the operating voltages, they will commonly specify 5 volts.
Intel SSD 530? "5.0V SATA Supply Rail" [1]
Seagate 600 SSD? "+5V Max" [2]
Toshiba SG5? "Supply Voltage 5.0 V ±5%" [3]
Swissbit X-60? "5V± 10% (3.3 V available upon request)" [4]
Apacer SV250-25? "5.0V ±10%" [5]
So the statement that "12V is the defacto standard for powering pretty much anything these days" isn't accurate in the case of 2.5" SSDs. I acknowledge desktop computers often supply 12v to the motherboard and graphics card, of course!
(Some 2.5 inch SSDs, like the Micron 5100 Series use both 5v and 12v [6] - and I agree that M.2 is 3.3v only)
And the VRMs and associated circuitry are the most likely thing to die on the motherboard. The PSU, at least in my experience, is the most likely thing to die in a desktop computer. So now you have all the most likely to die components on the most expensive part of your desktop system.
Being able to replace the PSU without the cost of the mobo is great for human people. For corporate people and their economies of scale it makes less sense than just making everything 1 thing and throwing it out when it breaks.
Only as anecdata: in my practical experience over the years, I have needed to change many power supplies[1] and very few motherboards (excluding replacements due to upgrades).
I would go a little bit further and say that the PSU is probably the most replaced "standard" part of a PC, followed by hard disks.
I have also repaired a PSU a couple of times, replacing this or that failed component, but it is simply not worth the time and money: spare new PSUs are cheap enough and replacement is very easy, whereas even procuring a single chip or capacitor is complex for non-professionals, never mind the soldering tools, etc.
Hopefully the parts that age or otherwise tend to fail will remain in the (easily replaceable) PSU; otherwise we will see an increase in replaced motherboards.
Some sort of (standard) "daughterboard" with the voltage-regulating components would have made more sense (to me), but surely it makes less sense from an industrial manufacturing point of view.
[1] be it due to capacitors, power transistors or mosfets or whatever, and due to whatever reasons, be it aging, power surges, etc.
If you mean that "standard" OEM PSUs (think Asus, Fujitsu, IBM/Lenovo, and a few mini-ITX Shuttle cases) are "bottom of the barrel", yes.
Then, since often a used, original, non-standard (say) IBM/Lenovo PSU can be found for a mere 100-150 € on eBay, I started getting el-cheapo (though not el-cheapest) ones with no appreciable decrease (or increase) in durability, putting them externally and re-mapping the cables.
For the mini-ITX cases (that use a not-very-common form factor) I gutted new 1U server PSU's and managed to fit them in.
The PC I am writing this on (an ASUS) is from 2008, and I replaced a PSU in 2012 and one in 2016 or 2017, I cannot remember which.
Of the four IBM/Lenovos I have at another office, dated 2010 or 2011, two still have their original PSUs and two have the external adapted ATX; no other parts replaced.
All in all (and for whatever reasons) I would estimate that over a 5-10 year lifetime of a system[1] there are:
- 1-2 replacements of PSUs
- 1 replacement of hard disk(s)
- no replacement of other parts, let alone the motherboard
Since 2012 I am on my third Fujitsu thin client (used as a router) with a failing PSU (in this case they are so cheap second-hand that I replace the whole little machine).
[1] The mini-ITXs (there were three of them), before being decommissioned, lasted 2003-2018, i.e. 15 years, running NT 4.0 or 2K 24/7, each with 2 PSU replacements; the last one, being a server-grade PSU, would most probably have been fine for another five or more years.
That is an insane rate to be going through PSUs. Many of the enthusiast PSUs my friends and I have put into our computers usually outlive the other components. Generic grey-box PSUs off eBay usually don't have the same standards as a SeaSonic or Thermaltake unit.
I remember the days when Microsoft and Intel would host the "PCxx" spec (where xx was the last two digits of the year) and tell everyone what all the standards for desktop machines were going to be for the next few years. Those days are gone though.
12V-rail PSUs are nice for a number of reasons, but I always liked that you could feed your DC input from the AC side or a battery and have a "free" UPS at the same time. It is a bit tricky with a switching supply, since you don't want the battery seeing 25kHz power outages as the switcher turns on and off, but it's very doable. With a little bit of instrumentation, the PC itself could control charging and monitoring of the battery, so only a minimal amount of hardware is needed to give every motherboard its own UPS.
I am very much interested in this; using DC power sources is the future. How do I go about adding a 12V DC power supply to an ordinary ATX computer? Can you find some links for this?
Such supplies exist for some levels of power. They're very popular in the MiniITX community, where power levels are < 300W[1]. There is also a wealth of material on developing DC-DC converters, and a number of modules for the same.
Such a solution would not be compact and integrated like one that was designed from scratch to be used in this way, but would certainly work.
I used one of the MiniITX supplies to power a system that is mounted in a box on a tree with a solar panel charging a deep cycle battery. It is the download/storage point for a number of wildlife cameras nearby.
Thanks, this is useful. I am thinking about building a UPS at home from batteries, so it can supply around 650W of DC power. I assume that using a voltage of 12V on the supply side of the DC-DC converter would be too energy-inefficient for such high power, so I'm thinking I'd better string 12V batteries in series to get a higher voltage, say 300V. This should have minimal transmission losses, so I can run this to my DC-DC converter via standard electric wires.
Most of the mid-range UPS systems rated at 650VA or above will use dual 12V batteries for 24V driving an inverter. You can get 1kW inverters for RVs, but they need thick (0 gauge) wires connected to the battery.
For "universal" battery power (and lots of gear in the market already) consider 48V power. That has been the standard at phone companies for decades so there is a lot of support for it. I once bid on a 10kW 48V inverter on eBay that had been removed from service at a phone company. It made (240V two phase power for telephone racks).
This will make motherboards more expensive and fragile. Desktop chipsets from Intel can only support two generations of processors, while a good PSU can have a ten-year warranty.
Why 12v though? What requires that much voltage? The chips are running at a few volts. Spinning drives are gone, is it just for fans? Seems silly to provide 12v then shift it down again.
While I don't know the exact component that needs 12V, you have to keep in mind that a typical computer draws about 300W under load. 300W / 12V means 25 amps. Reduce the voltage without lowering the power consumption and you need bigger wires to handle the increase in amps.
So I guess this makes it cheaper to make lower-idling power supplies; or does a high quality current ATX PSU, like one of the Seasonic Titanium models, not meet the CEC standards for idle power?
The big advantage could be finally moving the PSU out of the case, and having it as just an external power brick. This will make systems significantly lighter and smaller (and thus more portable), take heat out of the case, and improve airflow.
You can already buy 330w laptop chargers - just beef them up and you could easily run a mid-grade gaming PC.
2 pins would be plenty: 12V power, ground, and an overlaid, capacitively coupled two-way data channel running at a few hundred kHz to send and receive status messages, fan speeds, model/serial numbers, firmware versions, temperatures, on/off commands, etc.
One pin is called PS_ON; shorting it to ground turns the supply on. One is called PWR_OK, which says everything is working in some way I don't want to research. One is +12V standby. One is +12V sense, for detecting voltage drop or something. Then there are 2 normal +12V pins and 3 grounds. There is 1 reserved pin as well (it's a rectangular connector, so it needs an even pin count).
> +12V sense for detecting voltage drop or something
This is actually the best bit, incredibly smart idea. It's a kelvin connection to ensure that you get 12V at the motherboard, regardless of power dissipation in the wire.
FWIW, the sense line is for 3.3V, not 12V. The 12V bus will see less drop to begin with, and nobody really cares too much about the voltage tolerance on that one. 3.3V is much more important.
Actually the limiting factor is the amperage rating of the molex terminals ("pins") that make the physical contact. I have expertise in this area back from the days of Bitcoin GPU mining where I built custom cables that I crimped myself after buying the molex housings, molex terminals, and wire.
Each individual terminal, formally called a Molex mini-fit junior terminal, is rated either 7.0A (typical for those with 2 contact springs) or 9.0A or 13.0A (for those with 4 contact springs.)
As you can imagine, most cheap PSUs and wire harnesses are made of terminals with 2 contact springs, so the three 12V wires on the ATX12VO spec allow a max of 3×7.0A = 21.0A or 252 watts.
As to voltage drop caused by resistance in the wire, it will be sufficiently small with 16AWG wire which will easily carry 7A over the short distances in a PC case.
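Putting the parent's two numbers together, under the same assumptions (7.0A two-spring terminals; 16 AWG at roughly 4 ohms per 1000 ft):

    # Three 12V terminals at 7A each, plus the drop over a short 16 AWG lead.
    pins, amps, volts = 3, 7.0, 12.0
    print(f"max power: {pins * amps * volts:.0f}W")    # 252W

    ohms_per_ft = 4.0 / 1000    # ~16 AWG (assumed figure)
    run_ft = 2 * 1.5            # 1.5 ft lead, out and back
    drop_v = amps * ohms_per_ft * run_ft
    print(f"drop at 7A: {drop_v * 1000:.0f} mV per wire")   # ~84 mV, negligible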
And it looks like 2mm and rated for 15A, and 1mm for 10A.
(I thought DC current used the volume of the wire and AC the surface area, so I'm a bit surprised that wire ratings don't distinguish AC and DC. Though I suppose heat dissipation is determined by the surface area so perhaps that makes sense.)
To deliver 1500 watts at 12V through one pin would be 125 amps. With 5 pins at 25A each, I would think you'd want at least 14 gauge per pin, and even then I think they would run rather warm.
If you’re sending 1500 watts directly into a motherboard, something is probably wrong. Massive power draws (CPU, GPU) would always get their own direct connection, just as they do today.
Apply this simple rule of thumb: watts become heat. The only major exception to this is where the power has exited the computer via USB or similar.
I haven't assembled a PC since the Pentium 200 days. I recall hard drives, floppy drives, and the motherboard had their own power connections, but not the CPU or GPU of the time.
Still, a single pin is probably something like 18 gauge and would only be good for 10A before it gets too warm for comfort.
> I haven't assembled a PC since the Pentium 200 days [...] had their own power connections, but not the CPU or GPU of the time
That hasn't been true for a long time. Nearly every motherboard for many years has a separate "ATX12V" plug dedicated exclusively to the CPU voltage regulator input. According to Wikipedia (https://en.wikipedia.org/wiki/ATX12V), the first CPU which needed it was the Pentium 4 from the early 2000s.
As for the GPU, the PCI Express slot can only go up to 66W in the 12V rail (https://en.wikipedia.org/wiki/PCI_Express#Power), and that's a software-enabled special case for GPUs (the normal limit being 25W for long slots and 6W for the short x1 slots), so higher-power GPUs need one or even two separate power plugs.
I always thought that was why the big ATX connector has multiple 12V wires, multiple GND wires, etc. To push heaps of current, but using a normal gauge of wire, rather than something huge.
Not all your power requirements will go directly to the motherboard though in that single connector. For example the GPU will have its own dedicated power plug(s). I would expect the motherboard to need < 500w for most systems.
Go back to step 1: don't design it utterly broken?
I suspect the real reason to use many smaller wires instead of two big ones is cable flexibility. There is no electronic problem with two big ones, unless you repeatedly shoot yourself in the foot. Don't do that.