Hacker News

> The performance that SolidJS ekes out of the DOM is really next level.

Kind of a weird way of putting it. Intuitively, any framework abstracting concepts on top of DOM manipulation has to be slower than direct DOM manipulation.

But yes, in comparison to other frameworks, the benchmarks they publish do look impressive.

Now I'm curious to do some benchmarking of my own.



That's kind of why the entire virtual DOM concept came about: it was faster than direct DOM manipulation. Essentially, batched updates to the DOM were faster than ad-hoc updates.

Now React is 8 years old and browsers have improved a lot since then, so I imagine the gains might not be what they used to be. But at the time it was huge.


Virtual DOM came about because it offered a simple top-down view = fn(state) model without terrible performance. Other top-down renderers were terribly inefficient, and the VDOM improved on them. It was never innately faster than targeted direct DOM manipulation; it was only faster than other approaches that were themselves built on diffing. And things like reading from the DOM can cause reflows and other serious performance bottlenecks.
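
To make the trade-off concrete, here's a minimal sketch of the VDOM idea, re-render the whole view as a cheap object tree, diff old against new, and emit only the changed operations. This is illustrative pseudo-React, not any real library's internals; `h`, `diff`, and the patch shapes are all made up:

```javascript
// A "virtual node" is just a plain object; creating one touches no DOM.
function h(tag, text) { return { tag, text }; }

// Compare two virtual trees and collect the minimal set of patch operations.
function diff(oldNode, newNode, patches = []) {
  if (!oldNode) patches.push({ op: 'create', node: newNode });
  else if (!newNode) patches.push({ op: 'remove', node: oldNode });
  else if (oldNode.tag !== newNode.tag) patches.push({ op: 'replace', node: newNode });
  else if (oldNode.text !== newNode.text) patches.push({ op: 'setText', text: newNode.text });
  return patches;
}

// view = fn(state): rebuild the whole (virtual) tree, but only patch what changed.
const view = (state) => h('p', `count: ${state.count}`);
const patches = diff(view({ count: 0 }), view({ count: 1 }));
// patches: [{ op: 'setText', text: 'count: 1' }]
```

The diffing work itself is the overhead being discussed: the real DOM only receives the one `setText`, but computing that required rebuilding and walking the virtual tree.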

Fine-grained reactivity existed back then and was more performant for updates. Always was. It just had its own issues, since before MobX we didn't see JavaScript implementations that provided glitch-free execution guarantees. So Virtual DOM was a great invention, but I think it was misrepresented early on. That's what got me to start working on Solid. I knew the performance was there without a VDOM from day one. I'd seen it. So when Knockout started waning in popularity in 2015/2016, I started working on a replacement.
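
The core of fine-grained reactivity fits in a few lines. This is a deliberately minimal sketch, not Solid's actual implementation (which handles glitch-free scheduling, disposal, etc.); the names mirror Solid's public API but everything here is illustrative:

```javascript
// The effect currently being run, so signal reads can register it as a dependent.
let currentEffect = null;

function createSignal(initial) {
  let value = initial;
  const subscribers = new Set();
  const read = () => {
    if (currentEffect) subscribers.add(currentEffect); // track dependency on read
    return value;
  };
  const write = (next) => {
    value = next;
    for (const fn of [...subscribers]) fn(); // re-run only the dependents
  };
  return [read, write];
}

function createEffect(fn) {
  currentEffect = fn;
  fn(); // first run registers which signals the effect reads
  currentEffect = null;
}

// Usage: on write, only this one effect re-runs. No tree is diffed.
const [count, setCount] = createSignal(0);
const log = [];
createEffect(() => log.push(`count is ${count()}`));
setCount(1);
// log: ["count is 0", "count is 1"]
```

The point of the comparison above: an update flows directly from the signal to the specific DOM node (or here, the specific effect) that depends on it, rather than re-rendering and diffing a whole view tree.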


That perception of the DOM/VDOM situation is and always was false. React was always slower at runtime than a carefully-engineered system designed for performance.

As Dan Abramov said a couple of years later <https://medium.com/@dan_abramov/youre-missing-the-point-of-r...>, people were missing the point of React: it was never about VDOM; rather, that was a cost that at the time they reckoned had to be paid in order to write reliable code in an immediate mode style, because if you tried doing that without DOM reconciliation the result would be atrociously bad. VDOM came about because the alternative (the consistently faster alternative, I may add) entailed things like explicit DOM mutation that was far too easy to make mistakes with, and reactive data flow was generally even buggier. It was a carefully-chosen trade-off: shedding some performance, for greater robustness and ease of use.

“DOM is slow, VDOM is fast” was a straw man comparison that entered the public perception but which the React team mostly stayed well clear of: almost no serious systems have ever used the DOM directly in the immediate mode style, because it has obvious and serious problems in both performance and transient UI state like scroll and caret positions and element focus (… and transient state things are problems for all immediate mode interfaces, not just DOM ones: escape hatches are fundamentally required).

Was VDOM worth the cost at the time, compared with the other options then available? For most people, probably. And even for the rest, React presented useful food for thought that led to other options improving too. Is VDOM worth the cost now? Well, I’m with Rich Harris that VDOM is pure overhead <https://svelte.dev/blog/virtual-dom-is-pure-overhead> and that we have more efficient ways of doing things now.


Wasn’t a separate big motivation that reading from the real DOM (in order to generate a diff with the new intended DOM) is also slow?


The DOM used to be slow, incredibly slow, but that was a very long time ago, when JavaScript only executed as an interpreted language. The DOM has been insanely fast since even before React was born. Using micro-benchmarks you can see that DOM access, when not using query selectors, tops out at around 45 million ops/s in Chrome and between 700 million and 4-5 billion ops/s in Firefox, depending on your CPU and RAM. That is fast. No higher-level framework will improve upon that.
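
If you want to reproduce numbers like these, a micro-benchmark harness is a few lines. `opsPerSecond` is a made-up helper here, and meaningful DOM figures have to be measured in an actual browser:

```javascript
// Tiny micro-benchmark harness: run fn in a tight loop and report ops/s.
// Caveat: real benchmarking tools also handle warm-up and JIT effects.
function opsPerSecond(fn, iterations = 100000) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  const seconds = (performance.now() - start) / 1000;
  return iterations / seconds;
}

// In a browser console you might compare, e.g.:
//   opsPerSecond(() => document.getElementById('app'))
//   opsPerSecond(() => document.querySelector('#app'))
```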

Back in the day, when the DOM was slow, the primary performance limitation was accessing everything through a single bottleneck, the document object. To work around this, the concept of document fragments was invented. These aren't used anymore because the DOM itself is now insanely fast, while modern implementations (popular frameworks) are so incredibly slow. You aren't going to achieve a technology solution to a people problem.
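
For reference, the fragment pattern looks like this: assemble a subtree off-document, then attach it with a single append so the live page is touched once. The helper name `appendRows` is made up, and since only a function is defined, running it still requires a browser DOM:

```javascript
// Batch insertions through a DocumentFragment: children are built
// off-document, then attached to the live tree in one operation.
function appendRows(container, labels) {
  const fragment = document.createDocumentFragment();
  for (const label of labels) {
    const li = document.createElement('li');
    li.textContent = label;
    fragment.appendChild(li); // no live-DOM work yet
  }
  container.appendChild(fragment); // one insertion into the document
}
```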

The first big misconception about DOM performance is the difference between DOM interaction and visual rendering. Visual rendering is fast now because it's offloaded to the GPU, but it's still far slower than accessing and modifying the DOM. As an example, set an element to display:none and then perform whatever DOM modifications you want on it. Those changes involve no visual rendering, are still DOM manipulation, and are insanely fast. You can measure this with a micro-benchmark tool.
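
That hide-mutate-restore experiment can be sketched as below. The helper name `mutateHidden` is illustrative, and again only a function is defined, so exercising it needs a browser:

```javascript
// While display is 'none' the element generates no layout boxes, so the
// mutations inside `mutate` are pure DOM work with no rendering cost.
function mutateHidden(el, mutate) {
  const previous = el.style.display;
  el.style.display = 'none';   // removed from layout
  mutate(el);                  // arbitrary DOM changes, no visual rendering
  el.style.display = previous; // one restyle/layout when restored
}
```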

The second big misconception about DOM performance is how to access the DOM. The fastest means of access are the old static DOM methods, like getElementById and getElementsByClassName. Query selectors will always impose a huge performance penalty when there is a static method to do the same job, and offer only a minor performance boost when there isn't. The querySelectorAll method compounds that penalty. The penalty exists because the selector string must be parsed into something roughly equivalent to the static methods, an extra step on every operation that the static methods do not require. The minor boost for lookups with no single static equivalent, such as access by attribute, comes from the parsed selector taking fewer steps than manual traversal, but that boost is exceedingly minor (16x at most).
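
The pairs being compared look like this (wrapped in a function so nothing runs at load time; the id/class names are placeholders):

```javascript
// Equivalent lookups: the static methods go straight to internal indexes,
// while the selector variants parse the selector string first.
function lookups() {
  const byId = document.getElementById('app');         // direct id lookup
  const bySelector = document.querySelector('#app');   // parses '#app' first
  const rows = document.getElementsByClassName('row'); // live collection, no parsing
  const rowsSel = document.querySelectorAll('.row');   // parsed selector, static NodeList
  return { byId, bySelector, rows, rowsSel };
}
```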

Developers usually prefer slower means of DOM access because of a bias toward declarative approaches to programming. There isn't a performance tool that fixes developer bias.

If you want both performance and a less intimidating approach to DOM access, you can create aliases with friendlier names that also solve for code reuse, but you will still need to understand the concept of a tree model.
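
Such aliases can be one-liners over the fast static methods (the names here are made up):

```javascript
// Thin, friendly wrappers over the static lookup methods; no selector parsing.
const byId = (id) => document.getElementById(id);
const byClass = (name, root = document) => root.getElementsByClassName(name);
const byTag = (name, root = document) => root.getElementsByTagName(name);
```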


Fair enough. I don't retract the ethos of this statement, but really it should read: as far as abstractions go, SolidJS is a very performant framework, arguably more so than any other framework out there right now.



