
So many people talk up Erlang / Elixir's insane concurrency … but I'm struggling to reconcile the talk with what I find in stress tests and benchmark results.

Can you comment on why so many benchmarks show Erlang / Elixir performing significantly worse than other languages like Go?

https://stressgrid.com/blog/webserver_benchmark/

https://stressgrid.com/blog/cowboy_performance/

https://www.techempower.com/benchmarks/

EDIT: why the downvotes? I'd rather you post a comment to me so we can have a healthy dialogue.



Just woke up, so multiple things.

First of all, some context. The BEAM community does not engage heavily with these benchmarks compared to a lot of other communities, which means this code is definitely not optimised. On top of that, benchmarking makes it hard to do what the BEAM community loves above all else, which is to put the thing in prod and then tweak the performance based on what you discover there. The reason we do it so much is that the BEAM comes equipped for exactly that.

Second point, as pointed out below, is that the BEAM optimises heavily for latency and graceful degradation. One important thing to notice is the latency itself and its standard deviation in these results: they are both super low. What this means is that the BEAM system will still be highly reactive and interactive under load, but also that we are nowhere near the max load it can deal with in these benchmarks!

If you wanted to compare, you would need to overload the machine so that this latency climbs. Latency this low means the machine is not overloaded. It may be at 100% CPU, but it is coping fine; the BEAM is basically exploiting the machine to its limits.

The reasons are multiple, but basically you should consider that these benchmarks are not making the BEAM sweat. This load is just a normal day for a BEAM setup, not one that would make you auto-scale.

In that light, it is a far different picture, no? Happy to discuss.


The comment lower down points to this too, in a slightly different way that I deeply agree with: https://news.ycombinator.com/item?id=27683999


Someone else alluded to this, but Erlang as a runtime prioritizes -latency-. Most languages prioritize -throughput-. These are fundamentally at odds with each other (oversimplified, but: to maximize throughput you want things queuing up; to minimize latency you don't). This also isn't to say different versions of libraries and things don't muddy it some too, but look at the latency tail.

From those techempower benchmarks, look at elixir-plug-ecto. 2.8 ms max latency, with an average of 1.0ms latency. That average latency is very respectable (but not the lowest), but that max? That max IS the lowest.

What do you want from a web server? Something super fast for 99% of requests, and then takes orders of magnitude longer for the worst case, or something that is somewhat slower on average, but stays predictable?

And as still someone else alluded to, those have to do with the runtime characteristics. How easy is it to write reasonably performant, correct, concurrent code? Erlang (and Elixir) makes it very easy; I would argue easier than in any other language.


The Erlang ecosystem is centered around creating tools for dealing with concurrency. Concurrent programming concerns itself with trying to represent a certain set of semantics in which a program must deal with multiple requests that overlap in time. A concurrent program could ultimately be executed on a single logical processor via some form of time sharing. It just has to be able to deal with multiple requests which can overlap in time rather than neatly coming one after the other. derefr's comment that is a sibling to yours is a good example of pointing out Erlang's strengths when it comes to expressing concurrent semantics (notice nowhere in derefr's response is there any talk of performance).

However, Erlang, as you have correctly pointed out, is not very good if you only care about parallelism. Parallelism is not concerned with semantics, but performance; it concerns itself with trying to make a given computation faster by using multiple physical hardware resources but maintaining the same semantics as a hypothetical non-parallel version.

Your argument is a cogent one for why Erlang is not great if you only care about parallelism. Being able to scale a program to use 20 cores is no good if the same program written in another language could beat the pants off that program using just one core. However, that is an independent concern from concurrency.


Would you say then that Erlang has low latency and a low standard deviation in how fast it will respond to any request, under any amount of load?

It might not have the highest transactions per second, but for the transactions it completes, it'll do them fast and with the same low latency.

There are other languages that can perform more transactions per second, but as load increases, their latency and standard deviation grow exponentially.

(This is seen in the stressgrid benchmarks.)


I'm not the best person to ask about Erlang's performance characteristics. I know them in broad strokes from light reading about the BEAM VM but I've never written any Erlang that has made it out to production, only for the smallest of hobby projects. So I don't know, e.g. what the P99 latency of a given Erlang program might be.

That being said, my point was that concurrency is an independent topic from runtime performance (including latency), which is the domain of parallelism. Erlang's big selling point is it makes it possible to write concurrent programs whose functionality (not performance) would require oodles and oodles of code and discipline in some other languages.

EDIT: On reflection, I do think that if what you're talking about is catastrophic latency overruns caused by cascading queuing failures, then Erlang can indeed help make it easier to avoid those pitfalls.


> Would you say then that Erlang has low latency and low standard deviation of how fast it will respond to any request under any amount of load.

Yes, that's exactly correct. Erlang/Elixir server apps aren't the fastest around, but their latency is very predictable and they remain responsive under load, unlike programs written in most other languages.

That's the main selling point of the BEAM VM.


I'm curious how weird that's going to get with the JIT, as processing times become more variable.


It's likely there will be some more wildly varying latency figures until all hot paths are JIT-ted, kind of like it is with Java.


All code is compiled to its final machine-code form at start-up, so there's no warm-up. It's closer to an AOT compiler than most JITs.


Thanks for the correction, I was operating under a false assumption.

Good to know!


> Your argument is a cogent one for why Erlang is not great if you only care about parallelism. Being able to scale a program to use 20 cores is no good if the same program written in another language could beat the pants off that program using just one core. However, that is an independent concern from concurrency.

I think this sells it short. Serious software has to cross machine boundaries at some point. There are pieces of code and styles of writing them that work really well on one core. They may also do okay on one machine. But once you have to go to two machines they become a huge burden. A liability.

Part of the way people like IBM used to make tons of money was off of people who had software that could not cross the one machine threshold. They sold really, really big machines. They sold unobtanium that let you continue to scale your one machine vertically. Like $20,000 hard drives (1990's dollars) that were essentially battery backed RAM, meant for things like putting your WAL on to speed up transaction commits per second.

Speed doesn't mean much if it's taking you in the wrong direction.


the stressgrid benchmarks look bad because they forgot to turn off the "spin your cpus" setting. That optimization was really important for some lower-performing platforms of yesteryear (which the erlang VM does need to continue supporting) but is basically irrelevant on modern platforms, and actively bad if you're getting charged CPU credits. However, it's a command-line switch away. I don't know if the default has been changed in more recent BEAM versions (I feel like it was, around the time they were changing defaults to favor cloud deploys, like detecting concurrency from cgroup vcores instead of machine vcores)

The stressgrid folks did an RCA and posted the explanation in a subsequent article, but most people rushing to shoot down erlang/elixir don't get around to reading it.

Extracted comments from the article:

> we discovered that Elixir had much higher CPU usage than Go, and yet its responsiveness remained excellent. Some of our readers suggested that busy waiting may be responsible for this behavior.

> c5.9xlarge and c5.4xlarge instances show similar results in responsiveness, and no meaningful difference with respect to busy wait settings.

https://stressgrid.com/blog/beam_cpu_usage/
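For reference, the setting in question is the BEAM scheduler busy-wait threshold, controlled by the `+sbwt` family of emulator flags. A sketch of turning it off (flag names are from the `erl` documentation; defaults vary by OTP release, so check yours):

```shell
# Disable scheduler busy waiting (the "spin your cpus" behavior)
# for the normal, dirty-CPU, and dirty-IO schedulers.

# Plain Erlang:
erl +sbwt none +sbwtdcpu none +sbwtdio none

# Elixir, passing the same flags through to the emulator:
elixir --erl "+sbwt none +sbwtdcpu none +sbwtdio none" -S mix phx.server
```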


What is the "spin your cpus" setting you're referring to, exactly? It would be good to share for others to know.

Also, note that the stressgrid link you shared was published BEFORE the benchmark links I shared from stressgrid. As such, wouldn't those benchmarks take your explanation into account?


busy wait spins your cpus.

I don't know for sure, but the benchmarks you linked do some more RCA, which involves looking into why cowboy2 is not great (tl;dr HTTP2 support is hard)

also, here is how stressgrid got a POC (possibly too dangerous for prod?) to 100k cps (which is on par with what Go can do in the article you linked): you recompile the kernel and configure ranch differently. would probably need to be proven out more:

https://stressgrid.com/blog/100k_cps_with_elixir/


Your comment:

> "the stressgrid benchmarks look bad because they forgot to turn off the "spin your cpus" setting"

stressgrid blog post:

> "When running HTTP workloads with Cowboy on dedicated hardware, it would make sense to leave the default—busy waiting enabled—in place. When running BEAM on an OS kernel shared with other software, it makes sense to turn off busy waiting, to avoid stealing time from non-BEAM processes."

stressgrid seems to recommend the opposite of what you mention, taken from the blog post you shared [0].

Am I misunderstanding?

[0] https://stressgrid.com/blog/beam_cpu_usage/


You are. I'm saying turn it off unless you're on legacy hardware that is known to need it (most people are in the cloud). Unless it's already off by default (I don't know off the top of my head)

The perf looks bad because they are also looking at CPU usage.

Getting to 100k cps is unrelated to busy wait.


I feel like you're making an orthogonal statement about CPU usage and the wait/spin settings for "cloud hardware".

CPU usage has no bearing on transactions per second, so why are you bringing it up?

Spin/wait settings: how is "cloud hardware" different from "legacy hardware"? Both are servers located in a data center. Furthermore, the blog post goes on to say that if you aren't running any other non-Erlang services (your box is dedicated to Erlang), you should KEEP the default spin/wait setting, since it won't cannibalize other services running on that box ... which is exactly how these benchmarks were tested.

The fact of the matter is, Erlang had lower transactions per second than most of the languages benchmarked. My insight from this HN discussion is that where Erlang shines is having consistently low latency/response time. It's not the fastest, but it's the most consistent even under load.


That's still really understating Erlang's advantages, though. Erlang is a lot easier to write working, custom concurrent and distributed systems in because of the design of its primitives and runtime. Erlang has a bunch of different advantages, all related to controlling disparate pieces of software under one roof, so it's not easy to summarize why I love working with it in one comment.

In general, I think blog posts like this do a pretty bad job of explaining it. At some point I'll publish my take and hopefully it'll reach the front page here.


Elixir does not have a fantastic computational story. That's why it has NIFs, to bring in things like C or Rust to deal with the math stuff.

Languages like Go, C, and Rust will always beat Elixir/Erlang in computationally intensive benchmarks.

You would choose Elixir/Phoenix/Erlang for the concurrency and networking story.


But note, all of the benchmarks I posted in my parent post (especially the first two) are concurrency workloads, not numeric. And Erlang still performed noticeably worse than other languages.


> Erlang still performed noticeably worse than other languages

I think you need to define worse here...unpredictable spikes in latency will give you plenty of headaches when trying to guess how much hardware you should throw at a service. Erlang's consistent latency here is what I would choose above everything that benchmark shows for almost every problem I've ever solved.

Going fast at all costs is not a desirable trait for my software, and I suspect it isn't for most people's software. I want predictable behavior that operates gracefully under extreme circumstances.


In the 200Xs, the BEAM VM had a clear performance advantage in dealing with high-concurrency network loads. It has never had a raw performance advantage in terms of the bytecode it implemented, that is, the Erlang/Elixir layer (it was generally faster than Python/Ruby, but that wasn't saying much, especially back then, and it had clear performance disadvantages vs. C/C++/Java), but it had a superior internal runtime that could make up for that in performance benchmarks, as long as you didn't try to run too much BEAM bytecode. Much like how NumPy is very fast, as long as you don't try to run too much pure Python with it.

However, since BEAM doesn't have access to unique CPU instructions that nobody else has or anything else, and since a lot of focus across a lot of languages has been put on that problem, that particular advantage has waned, and Erlang to my eyes has indeed been outright passed on this front by multiple languages. In the 200Xs, I did not see people talking much about NIFs as a solution for performance; that talk has started as an effort to keep up with things like Go and other languages that have taken advantage of BEAM's lessons and explorations of the space.

Personally, while I think a lot of the hoopla surrounding Erlang/Elixir isn't wrong per se, I do think a lot of it is outdated. They'll say "We do X and nobody else does!" but while that may have been true 10-15 years ago, it isn't anymore. There's no performance reason to pick Erlang/Elixir over Go, for instance, and if you take the models of memory access back to Go there isn't a huge organizational reason either. What Erlang/Elixir force you to do, you can voluntarily do in other languages too. And I think that's become true across a lot of the other putative "advantages"; it isn't that Erlang/Elixir aren't nice in some ways, but I do wonder how much of the recent push is stemming from people who are experiencing some of these capabilities for the first time and trusting the Erlang/Elixir storyline that they're unique, when in fact they are increasingly just table stakes for a new language nowadays rather than special characteristics unique to the BEAM family of languages.


> What Erlang/Elixir force you to do, you can voluntarily do in other languages too.

This argument is older than dirt. I don't need Java, I can do all of this stuff in C (voluntarily). I don't need Y, I can do it in C++ voluntarily.

You don't control your team. It doesn't matter what you are willing to volunteer, because it doesn't work unless the whole team does it. And if it only works when everyone has to do it, that's not voluntary now, is it?

Every boundary that is by agreement only will constantly be pushed and pushed. Every time someone wants to leave early, or there's a production issue or a customer deadline, or they just don't want to. It's why size and speed are a constant fight at some places. Everything you fix is counteracted by ten other people who just don't care.

Either the system has to enforce the rule or your coworkers become enforcers. That's a shit job to begin with, and doubly so for introverts, who will either do too little or too much in the face of boundary testers.


'What Erlang/Elixir force you to do, you can voluntarily do in other languages too'.

Like immutability! :P

Jokes aside, I agree for basic concurrency you can get pretty far with other modern languages. I think it's what brings people's attention to Erlang/Elixir, but I don't think it's the most important differentiator. It also isn't the one that Erlang's community (I can't speak to Elixir) really touts, except as one that is easily understood by those outside of it.

The real benefit is fault tolerance. Everything about Erlang, the concurrency and distribution stories included, is built around fault tolerance. You need concurrency and distribution to be fault tolerant (can't have one bad process choking out others; can't have one bad machine taking the service down, etc). The immutability, the supervision tree, those also are about fault tolerance.

I've written production systems in Go. It scaled better with way less tweaking than the JVM based stuff we'd written previously required. But it wasn't nearly as resilient to failure, or as predictable, as the Erlang stuff I've run in prod was.


The funny thing is that precisely what got me into Go was replacing an Erlang system that was constantly falling over despite quite considerable efforts with a Go system that ran on a fraction of the resources, ran much more quickly, and by comparison was rock-solid. I just ported the essence of supervision trees in to Go and was off to the races.

This is part of what I mean; Erlang doesn't have access to special CPU instructions that make supervision trees possible. They're just code. There's no reason you can't write them in other languages. It's not even particularly hard, unless you insist on exactly matching all the accidental details of the way Erlang implements them instead of implementing their essential details in a manner idiomatic to the base language.


"It's not even particularly hard"

Define hard here? Because there is a lot of bookkeeping involved, and, yes, to get some of the effects you have in Erlang that are necessary for reliability, you'd basically have to create your own runtime atop Go. I.e., yes, if you just want to "if this process fails, restart it", you can do that trivially in another language, but "if this process fails -kill these others and restart them from a known good state-" is devilishly hard, given that Go has no way to kill a running goroutine. You can send a message along a channel of "hey, you should stop", but if code execution in that goroutine never gets there, you have no guarantees.

And while the CPU doesn't have special instructions, the VM -does-. Exit signals are guaranteed in the language spec; a colocated supervisor is guaranteed to be able to both detect a failed process, and to be able to kill others. Go offers no such tooling, let alone such guarantees. I'd be quite interested if you say you did that; I suspect it was, as mentioned, just "hey, if an error comes out of this response channel, restart the goroutine". Possibly also a "here's a channel we can send 'kill' commands on, downstream goroutines should check it occasionally to see if they should terminate". A lot of bookkeeping, no guarantees.


I think you're making a mistake a lot of Erlang/BEAM/etc. (let me call it just Erlang after this) advocates make, which is to conflate Erlang's solutions to problems with the only solutions to problems. Almost no software is written in a language that has the ability to actively kill running threads externally. This is not a catastrophic problem that causes the rest of us to routinely break down in tears; it is a thing that occasionally causes bugs, merely one on a long list of such things. On the scale of problems I have, this isn't even in my top 50. When systems are written with that understanding, it's only a minor roadbump. So the fact that I don't have Erlang's exact solution to that problem isn't even remotely worth me switching (back) to Erlang for.

Is it a problem? Yes, absolutely. Is it a problem worth spending extremely valuable language design budget on? Heck no, and the fact that Erlang does is a negative to me, because what they gave up to get that capability is way more important to me.

Does my solution exactly match Erlang? No. Of course not. But it gets me 90% of what I care about for 10% of the effort, and in the meantime I get the other things that directly impact my job on a minute-by-minute basis, like an even halfway decent type system (it's not like Go's is some sort of masterpiece here, but it's much better than BEAM's), which Erlang sacrificed as part of its original plan. I understand why they have the type system they have, and what they got out of it, and I'd rather have a decent static type system and solve those problems another way, which happens to be the conclusion pretty much everyone else has come to as well. Again, genius thinking in the 1990s, way ahead of everyone else, don't let my current assessment of Erlang diminish the fact I deeply respect what it did in its time... but not a solution I have much interest in in 2021.


The idiomatic solution for Go would be to use Kubernetes, but that comes with an increase in operational complexity.


> The funny thing is that precisely what got me into Go was replacing an Erlang system that was constantly falling over despite quite considerable efforts

As the meme goes, Will Ferrell takes a deep drag on his cigarette and says "I don't believe you".

I agree that Go has a better type system than Erlang/Elixir/BEAM languages in general. Absolutely. But I think you're showing bias and were already looking for an excuse not to use Erlang. That's completely fair, but I think you are unfairly misrepresenting the exact merits of the decision.

> This is part of what I mean; Erlang doesn't have access to special CPU instructions that make supervision trees possible. They're just code.

Everything is "just code", dude. Yours is no more an argument than technologically advanced aliens visiting us and exclaiming "How come you don't have super-alloys that can withstand atmospheric entry without losing atoms? It's just chemistry!"

---

I am getting the vibe that you're one of those super-programmers that can tinker with everything and the computers have no secrets from them. That, or you are bragging too much.

And you should know something else -- I am not a huge fan of Elixir these days. I've worked with it for 5 years, but it's showing some cracks and a lack of community (and core team) attention in areas critically important for the ecosystem's advancement -- like compiler instrumentation, or tooling to modify code in an automated way -- not to mention the dynamic typing. I get it, it's far from the magic many fans make it out to be.

But you are discounting very real advantages in a very dismissive manner.

For the record, if one Rust runtime gains most of Erlang's OTP capabilities tomorrow then I'll switch to Rust for 100% of my work next week. But nobody has surpassed OTP's capabilities.

Finally, I'll also agree that we don't need 100% of OTP -- that much we are in complete agreement on. But as you yourself pointed out, cancelling running background threads is still a mostly unsolved problem, so just scoffing at a technology that has mostly achieved that is a very uncharitable take that makes me question your other arguments and wonder if they are not emotional ones.

I am by no means a super-programmer. On my non-humble bragging days I'd say I'm only very slightly above average. But I've tried to duplicate OTP in at least 3 other languages and failed miserably every time (Java was one of them). So yeah, don't just say "meh, I can invent OTP everywhere else". No, you really can't. If you can, open-source the effort and I'll donate, I promise.


the elixir community is quite credulous, particularly when it comes to circa-200X-era wisdom. people talk about the vm being the key to elixir/erlang and talk up things like lightweight green threads, message passing and the garbage collector, but the truth is these are all of fairly low quality compared to other competing languages/implementations

the real key for erlang and later elixir's success was pervasive async io. this was a genuine advantage during erlang's peak but languages, runtimes and libraries like go, node and nio have caught up and surpassed the erlang vm

the truth is without that advantage almost everything in the erlang/elixir world is worse than more mainstream alternatives. there's some exceptions -- ecto is pretty good because it has one of the best written connection pooling implementations i've ever seen. i think both languages are pretty good and i still write the occasional thing in erlang for my own satisfaction but the world has moved on and elixir and especially erlang haven't innovated enough to have kept up


> these are all of fairly low quality compared to other competing languages/implementations

Seriously? You say this without providing examples?

I'm not aware of any other programming environment that has BEAM-like processes.

Not Go, not Java, not any other system I know.


almost every language has lightweight cooperative threading (or green threads) available these days. go calls them goroutines; c# and ruby have fibers (altho i think ruby removed them, ultimately?); python has stackless; rust has tokio; julia has tasks; the jvm has like 4 competing implementations in akka, kilim, quasar and project loom. windows and linux both have built-in os-level support for cooperative multitasking (the fiber api and the ucontext api, respectively) that any language can use

these all have slightly different semantics and characteristics, but it's simply not true that beam is doing anything unique here. pervasive links between processes are a somewhat interesting wrinkle of the beam implementation but even that is achievable in other languages with little work

what made green threads work in erlang (and later elixir) was async i/o. in other languages, green threads would block on all i/o calls and had no opportunity to yield, whereas on the erlang vm all i/o calls would effectively yield while waiting. today nearly every language has async i/o (in libraries if not pervasively), so green threads are much more accessible

i don't say this as an elixir hater or whatever. i genuinely respect erlang's place in history as a popularizer of some of these concepts. when i say the elixir/erlang community is credulous and prone to exaggerating the differences between those languages and other more modern languages (particularly when it comes to implementation) i don't say it dismissively as a reason to abandon those languages. i say it because without an impetus to keep improving, elixir and erlang are going to become increasingly irrelevant. it would be a shame if the 'elixir is different and unique' attitude led to complacency and stagnation


I am not going to engage you on the details you got wrong (Akka not covering 100% of OTP's guarantees being one example), but I'll just point out something else on a higher, more pragmatic level.

Of the 7-8 programming languages I've worked actively with over my almost 20-year career, only Elixir apps had predictable and stable latency, even under load. So you know, at one point I stopped caring about how the BEAM does it or why the other languages/runtimes aren't doing it. I just started going to the technology that gives me this.

Is Go faster? Feck, absolutely yes! But its 95th percentile latency spikes through the roof under load, while a Phoenix/Ecto app raises its median latency by no more than 20-30% (the worst I've seen is +300%, when an app went from a median latency of 15ms to 60ms, and that was only in the 99th percentile of requests) even when the hardware is close to toppling over.

I feel that the raw muscle power of languages is vastly over-valued. I'd love to have a Rust with OTP's guarantees, because in some places it's literally 1000x faster than Elixir, absolutely. But in a world where we have to choose between raw power and predictable performance (even if that performance is lower than what we can get in other languages), I'll choose the latter any day.

And I am not alone in this. Many teams are choosing Elixir for exactly this reason.

---

One thing I'll agree with is that other languages have taken notice and are working hard to catch up with the OTP. I'd welcome them in the club once they are there because I hate language wars and I gauge technologies based on their merit. But they are still not there, sadly.


i'm not trying to convince you (or anyone, really) to stop using elixir. i am however encouraging you to engage deeper with the beam vm and its actual properties, and compare that honestly to what else is out there. what is it exactly about the beam vm and elixir that lets it achieve these latencies, and why is this not achievable in other languages? simply saying the beam vm is better than other implementations isn't an answer to that question


Don't get me wrong, I'd LOVE doing that, but my work time and employer priorities don't allow it yet (and might never). And I am starting to get really sick of extending my work time into my free time as well.


> c# and ruby fibres (altho i think ruby removed them, ultimately?)

Ruby did not remove Fibers (in fact, they've recently been enhanced, in 3.0, to optionally be nonblocking, that is, automatically yielding on any operation that would block).

Ruby removed continuations from the core (moved them to stdlib) after adopting independent Fibers way back with 1.9; continuations were the previous mechanism for similar lightweight concurrency.


What about functions as a service, like AWS lambda?

If you use Node as an example, your code is JIT compiled to machine code, any single request can fail, and you can scale to any number of requests without thinking about the underlying OS or VM.

Async/await will allow you to do a "blocking receive" like Erlang's processes.


Nah, BEAM is not suited well for that. It has a big startup time. BEAM is designed for long-running daemons, not "wake up, do small amount of work, get shut down".

For something like this I'd choose Rust or OCaml due to their insanely fast cold startup time (if the program is a CLI tool).

Erlang/BEAM is not there yet and it might not soon be.


Wasn't suggesting to use the BEAM.

OP said he did not know any other "BEAM-like environments", but AWS and GCP are "BEAM-like" systems in that they allow you to use distributed hardware to achieve scale and fault tolerance.


This sounds true on the surface and many people have argued that e.g. Kubernetes is "OTP but for distributed nodes" but I remain skeptical. The devil is always in the details and I haven't heard many people being very pleased with Kubernetes.

Admittedly Google's Cloud Run is very easy and nice to use though. And fairly cheap.


First of all, the other programming environment like it is Pony. Although that's barely a C-list language right now and very young, it's one to keep an eye on.

I'd also cite the real technology I'd use today, which is a heterogeneous set of services in whatever languages I'd like, hooked up by a high-quality message bus. This is the real technology that drives the at-scale Internet. You basically get Erlang's reliability out of that setup when used properly, and you don't need Erlang to do it. In fact you can get a touch more than Erlang's reliability, because I find in practice 1-or-n delivery to be much more practical than Erlang's 0-or-1 delivery. It's basically the same environment Erlang gives you integrated, except decoupled, and since all the pieces are decoupled, while Erlang has sat on the same effective point in this space that it picked out 25 years ago, all the decoupled components have been iterating and evolving over that time frame and are now better than what Erlang offers in its integrated package.

Second of all, if in 2021, around 25 years after Erlang and easily 15 years after Erlang has been generally known as a B-list language among language designers, almost nobody else has seen fit to copy it... maybe it isn't that great of an idea. Rust does something completely different, and in my opinion, strictly more useful, albeit at a cost in programmer complexity. I moved to Go from Erlang roughly 8 years ago, and I'm happier, because it turns out "general community good practices + channels" is fine, and also means I can go faster, and get a nicer language in the meantime. All the other modern languages are coming with some sort of concurrency story; it's table stakes for any language born in the last 10 years, if not the last 15.

For the mid 1990s, it was sheer genius. For 2021, it's a very brute-force, inelegant solution to the problem that nobody's very interested in copying. While in the 1990s concurrency was a nightmare and Erlang legitimately had a claim to a better solution, in 2021 there's a good 3 or 4 things I'd use before dropping back to Erlang as a solution. Concurrency is much less of a problem than it used to be, through a combination of various things, and the proposition of burning so much of a language's design budget on that problem is a lot less appealing than it was 30 years ago. Erlang really needs to adopt Go-like channels for some of what it's doing (not as a replacement for processes, but for some of the things they're not very good at). The ~10x slowdown for general logic is a real kick in the teeth in 2021, the lack of backpressure in the Erlang message model becomes a big problem at scale, and there are lots of other little problems I'd have if I had to go back to it. (Yes, I've been reading the release notes. If I weren't I'd have a couple more things to add.)
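To make the backpressure point concrete, here's a hedged sketch in Python's asyncio (standing in for Go-style channels): a bounded queue makes a fast producer suspend whenever the consumer falls behind. An Erlang send, by contrast, never blocks; the receiving process's mailbox simply grows without bound.

```python
import asyncio

async def producer(ch: asyncio.Queue, n: int) -> None:
    for i in range(n):
        # With maxsize set, put() suspends while the queue is full --
        # built-in backpressure, which unbounded mailboxes lack.
        await ch.put(i)
    await ch.put(None)  # sentinel: no more items

async def consumer(ch: asyncio.Queue) -> int:
    total = 0
    while (item := await ch.get()) is not None:
        total += item
    return total

async def main() -> int:
    ch = asyncio.Queue(maxsize=2)  # bounded, channel-like
    _, total = await asyncio.gather(producer(ch, 5), consumer(ch))
    return total

print(asyncio.run(main()))  # prints: 10
```

Because the channel holds at most two items, the producer can never race arbitrarily far ahead of the consumer; that flow control is what an overloaded mailbox-based system has to bolt on by hand.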

Erlang/Elixir/BEAM isn't leading the pack anymore. They're a cut behind in most ways now, but the community still thinks they are leading, ensuring that none of the lessons learned by other communities can filter back into the Erlang/Elixir/BEAM community.


Even if I disagreed with you on another comment I'll have to say that I find myself much more in agreement with you here.

Erlang / the BEAM did indeed make a lot of good innovations, and I can only be angry at myself for being an idiot, pressured by employers, and never looking beyond it all for something better (until 5+ years ago, anyway). But I agree that some of it is starting to show cracks.

In terms of language design, Erlang (and Elixir) aren't anything special. I can't fall in love with syntax anymore because I've literally never seen a language I completely like (LISP included, although it and OCaml are fairly close to ideal languages if you don't stray too far off the beaten path into their more arcane constructs, of which OCaml sadly has plenty).

To clarify, I believe Elixir is one of the most solid contenders for writing highly available and reasonably performant Web / GraphQL server apps, but the lack of compiler tooling, of tooling to modify the AST, and a few other gaps are definitely starting to hurt it. Having standardized introspection in the language helps it reach higher levels, e.g. tools that can manipulate an existing project, a la how TreeSitter and/or SemGrep can modify/query language-specific constructs. Elixir doesn't have that, and I am starting to get annoyed with it because of that.

RE: Using an external message bus makes sense, but let me point out something important that seems to often go unsaid in discussions about Erlang / Elixir:

The BEAM gives you a lot of good training wheels and the truth is that at least 90% (if not 98%) of the commercial projects out there don't require much more than that. As shared in the other comment, I was able to get away with not using Redis for a long time and had zero trouble. I only yielded after we needed to share various message queues and events/streams with other apps (not written in a BEAM language).

So I'd say the BEAM ecosystem gives you a lot out of the box, plus the Elixir community is small but fairly dedicated and they have libraries of excellent quality. But, as you alluded to, when you need to take those training wheels off, other much more dedicated and focused technologies like Redis do exist, and we should reach for them once circumstances change enough.

Would you agree with those assessments?


I'm not disagreeing with your results, but you should be using wrk2-based (https://github.com/giltene/wrk2) benchmarks to avoid coordinated-omission errors when measuring latency.


The new JIT does improve the computational story a little, but yes, it's a little like Python, where you use it for orchestration and then offload the heavy work. That said, new systems like Nx do make the 'offloading' part significantly cleaner for some applications.


Here’s a good one to get into deeper comparison with Python, Go and Elixir. It’s one of the few that I’ve seen that does a good job of showing more than just straight line speed.

https://medium.com/@marcelo_lebre/a-tale-of-three-kings-e0be...


Note that Erlang/OTP 24 (the latest release) includes a JIT for the first time. It only runs on x64 but should significantly improve performance on that platform; for some workloads people are reporting as much as a 40% improvement. I would expect those benchmarks to improve as a result.

OTP 25 will also include JIT support for ARM64.


Concurrency vs. performance, not concurrency is performance.



