I have found Go to be a particularly good language for CLIs, at least compared to Java, C#, or Python.
It's reasonably fast, compiles down to a simple-to-distribute binary, and the language is forgiving enough that you can do exploratory programming in it. Goroutines make it especially easy to deal with network calls as well. For anything that needs absolute performance, look elsewhere, though even then Go can be a good choice for prototyping.
I actually started learning Go with CLI applications. I have found that https://github.com/spf13/cobra tends to be one of the better CLI helpers you can get into, but https://github.com/jpillora/opts is one I have been meaning to try since seeing a presentation on it.
> [Go] is forgiving enough that you can do exploratory programming in it.
I agree with most of your post, but I’m not sure I would describe Go as “forgiving”. In fact, it’s well known for being strict. For example, exploratory programming would be significantly easier if the Go compiler could (optionally) ignore unused variables and unreachable code.
I’ve also found exploratory programming in Go is hindered by needing to frequently cast between different integer types. Maybe I’m doing something wrong, but my Go code is often littered with casts between different integer sizes and signedness. It’s much easier with Python’s arbitrary-precision integers.
package main

import "fmt"

func main() {
    var start int = 1

breakOuter:
    // for x := start; x < 10; x++ {
    for x := 3; x < 10; x++ {
        for y := 0; y < 10; y++ {
            result := y*10 + x
            // fmt.Printf("Result: %d\n", result)
            // if result == 42 {
            //     fmt.Printf("Breaking")
            //     break breakOuter
            // }
        }
    }
}
Building:
$ go build
# example
./main.go:3:8: imported and not used: "fmt"
./main.go:8:5: label breakOuter defined and not used
(compilation fails)
$ go build -gcflags=-warnunused    # hypothetical flag; no such option exists today
# example
./main.go:8:5: Warning: label breakOuter defined and not used
./main.go:3:8: Warning: imported and not used: "fmt"
./main.go:6:9: Warning: start declared and not used
./main.go:12:13: Warning: result declared and not used
(compilation succeeds)
I had the same opinion you did until I went to a Go meetup and everyone else was a C programmer. Coming from Ruby, Go wasn't expressive or forgiving. But everyone coming from C thought it was wonderful.
Personally, I feel you with respect to casting integer types. I'll code away with ints until something suddenly needs an int64 to use a package, and then I have to cast everything or refactor everything to int64. I once commented in a thread where people were asked, "In hindsight, what feature would you like in Go?" I said that int should just be an alias for int64 (and float for float64), since those are the defaults in the stdlib. I was downvoted into oblivion, in a thread about hypotheticals. I understand the historically machine-dependent 32/64-bit difference, but since the stdlib made a choice, the default should line up with it. That said, I mostly run into this in Project Euler problems, so not in my day-to-day work.
> Maybe I’m doing something wrong, but my Go code is often littered with casts between different integer sizes and signedness.
It can actually be a feature, and one of the things that brought me to Go: you can define your own integer (or float, or string, etc.) types, making them incompatible with each other:
type distance int
type speed int
func distFor(d distance, t time.Time) speed { ... }
...
x = distFor(x, t) // Oops, that's probably a bug!
Not many languages let you do this, especially back then. But yeah, not great for exploratory programming.
This is probably the biggest thing I wish was easier to do in Rust. You can make "newtypes" by wrapping the value in a tuple (that gets compiled away), but it's a fair amount of boilerplate or macros to make the newtype useful. Once you have what you need, it's fine, but it's even less convenient for exploratory programming ;-)
> but it's a fair amount of boilerplate or macros to make the newtype useful.
As mentioned in the docs, you can use the Deref facility to have the newtype implement everything that the original type does. Rust just gives you the choice of doing this vs. wrapping with a custom set of impls.
I have to admit that I felt that way about it too. In fact, I think everybody felt that way about it at first. It was a work-around. The reason it was never changed, though, was that people didn't find anything significantly better. Rust is already a big language. It doesn't necessarily need new constructs.
However, as the OP of this thread, I have to admit that I often feel conflicted about it. Using the Deref trait certainly feels like another hack (and some people really dislike it). Basically, Deref is what's used to dereference a reference. It's invoked automatically when you call a method on a value. So if you have a variable whose struct type implements a trait, a call will bind to the method on that trait. If your variable holds a reference to the struct, the compiler is smart enough to see that the struct implements the trait, automatically use the Deref trait to dereference the reference, and bind to the method.
So if you want to delegate from one type to exactly one other type, you can do it simply by implementing the Deref trait on the first type and having it convert to the second type. It's kind of elegant, but also kind of hacky :-) There are people who feel that Deref should only be used for objects that are actually memory references. Other people feel that it's OK to use it when one type is masquerading as another type (as we are doing when we put a data structure inside a tuple to make it a "new type").
On the plus side, it gives you really fine grained control without adding new constructs to the language. You can make a "new type" that incurs no runtime overhead (in speed or space) and you can choose whether to delegate all of the function calls to the contained type, or to control them explicitly. The former is really, really easy (essentially 5 lines of boilerplate) and the latter is extremely easy to read and reason about. I have to admit that it's hard to justify adding a new construct for something that is not actually broken (except when you first look at it ;-) ).
And, really, to me this is Rust in a nutshell. It's got a lot of really elegant and intelligent decisions going into its implementation, but all of them look incredibly unlikely when you first look at them. The result is that the learning curve is quite high and the road to fluency long. Often newbies (a group I probably still qualify for) ask, "Why the heck is it done this way?" When you get the answer, it makes sense, but it's often not as satisfying as you had initially hoped :-) Still, like others who push past that point, I've got to say that I really enjoy writing Rust code. It's strange.
OK, let's say instead that it's more suitable for exploratory programming than other compiled languages. Yes, it's strict about unused variables, but other features (e.g. fast compile times) more than make up for that.
What I mean is that while the compiler is strict, it's fairly easy to throw something together that mostly works. The large standard library really helps there. It's certainly not as easy as in Python, though.
Maybe I'm just unreasonable, but it always slightly bothered me when a JS or Python CLI utility has that half second of startup before doing anything, even displaying help. I can't be too annoyed, since in reality it's only a fraction of a second and they're spinning up the entire interpreter.
Single binary is also another one that really shouldn't matter to me, but still does. Especially for small utilities or web services, it's just really nice to know that I can `scp` to my server and just run it if I wanted to, even though, in reality, I always use a Docker container.
CLIs are sorta my ideal use-case for Go. Goroutines are so error-prone to control since you don't have many options for abstraction, so it's relatively difficult to build long-running highly-stable programs...
But CLIs don't usually need that. They can be ctrl-C'd if they go off the rails, and any dangling goroutines just die when the process dies. The simple distribution, fast startup, simple type system, and yolo-concurrency really pay off in pleasantly small and performant tools. And the stdlib is cross-platform, quite capable, and very friendly to use. It's almost exactly what you want when you need to go beyond a tiny bash script to something that might be up to a couple thousand lines.
I do wish the built-in flags lib wasn't so abhorrent though. Pulling in a replacement lib is step 1 for any CLI.
"Goroutines are so error-prone to control since you don't have many options for abstraction, so it's relatively difficult to build long-running highly-stable programs..."
This comment does not really make sense; Go's #1 usage is for backend services, which are indeed long-running, stable programs.
"They can be ctrl-C'd if they go off the rails, and any dangling goroutines just die when the process dies"
There are solutions in Go to handle that case, using contexts and channels, but ultimately it's not a Go problem: if you kill an app right away, there is usually no way to clean everything up gracefully.
His comment makes sense. golang's goroutines are finicky compared to other ways of doing concurrency (e.g. Futures/Tasks in Scala/C#, or Java's upcoming green threads). There are several reasons for this, not least that it's trivial to mistakenly ignore return values (especially errors) when executing `go foo()`. There's also a lot of boilerplate involved if you want to start several of them and wait until they're completed, or if you want to compose them. The other languages I mentioned, which have superior abstractions to golang, solve the issue in a much better way.
I'd also add that much of Go's concurrency control is intrusive: you need to write control-flow code into whatever it is you're trying to accomplish, or (typically) give up type safety or understandability. That's unavoidably more error-prone than not writing your own control-flow code, and I spend quite a lot of time discovering and fixing issues around it in libraries, even from some very skilled and careful teams.
Of course there are ways to achieve both in any language, but the massive educational pressure of the official docs and tutorials and examples cannot be ignored, and has profound impacts on what the community ends up building in the language.
---
It's roughly equivalent to manual memory management, IMO. Terabytes have been spilled claiming that C is safe if you're careful enough or use safe patterns, and CVE after CVE provides evidence that nobody is sufficiently careful. It has its benefits, but it also has its downsides.
> Goroutines are so error-prone to control since you don't have many options for abstraction, so it's relatively difficult to build long-running highly-stable programs...
Oh? Building long-running microservices is literally my main use of Go, and in my opinion the language excels at it. What sort of issues are you having?
I'm not OP, but several come to mind: easy to mistakenly ignore return values (such as errors), difficult to compose, difficult to join, boilerplate-heavy, no hierarchy/supervisor structuring.
As the typical sysadmin who likes to automate stuff, I have to agree. I am not experienced enough to talk about language design, but I can say that writing some small CLI application and deploying it onto a server is way less work with Go, simply because you can cross-compile the application and generate a standalone binary.
Every time I deploy some Python 3.7 Flask application on RHEL 7, I start to scream.
You were talking about long-running applications and broken goroutines: what is the best alternative? I have liked stumbling around in Elixir, but I feel bad about deploying an application that probably nobody at my office will ever be able to fix or update. Python feels too sluggish. Since I am no Python pro, I often have the feeling that I am doing something wrong, because the language doesn't impose any boundaries.
Alternative: I'd say Java (or similar), since it has the language abilities, and sophisticated static analysis tools are readily available. But startup time is still not interactively fast, so for CLIs it's sorta a no. For long-running processes, though, I mostly like it, and stuff like compacting GCs keeps it running healthy much more easily than Go.
Beyond that, dunno - I usually reach for Python since I've written it professionally for a few years and it's pleasantly terse. But it's a fair bit of work to make actually fast and I don't generally think it's worth that effort. Go is much easier there... as long as it's kept simple.
I have some strong hope for Rust, but I think it's fair to label it as "still maturing", though it's already far ahead of many langs in some areas. And I just don't have much experience with it yet, so I have no real conclusions ¯\_(ツ)_/¯
They've been working on it for quite a while, yeah :) I appreciate it and they've made quite a lot of progress... but it's still nowhere near Go, and I mostly doubt it ever will be.
# time ./hellogo
hello world
real 0m0.005s
user 0m0.001s
sys 0m0.005s
golang's gc is non-compacting. I wouldn't be surprised if there are cases where fragmentation becomes too much for a golang service to continue behaving properly. This becomes more likely in long running services.
Interestingly, my current CLI project https://github.com/boyter/cs/ does need to be stable, because I am putting a TUI mode into it... and yes, dealing with the dangling goroutines for the TUI mode is especially painful.
Generally if you just stick to fan-out-in processing though I find goroutines not too bad.
Check out structopt. It's a declarative layer atop clap. It's a bit polarising, but if you don't mind the "magic" (which you probably don't if you think clap looks nice) it's amazing.
Not sure why this is downvoted. Java, C#, and Python all tend to have slow CLIs. This is very much in-line with my experience. Even if the VMs do start up quickly, people tend to do a lot of expensive initialization, class loading, etc before the program starts. Go’s runtime is minimal by comparison and there is much less work done at initialization (by convention) than in Python, Java, etc.
I work on ML infrastructure (https://github.com/cortexlabs/cortex) and we originally wrote our CLI in Python. Rewriting it in Go has been a major win both in terms of performance and cross-platform support.