If you are building an application, you can and should know what your absolute time/power/cost budget is and evaluate the "speed" of operations against this absolute standard. It does not matter that some operation is 100x slower than it could be if it is still 10,000x faster than you need it to be.
But a lot of software engineering goes into building tools, libraries, frameworks, and systems, and even "application" code may be put to uses very distant from the originally envisioned one. And in these contexts, performance relative to the "speed of light" - the highest possible performance for a single operation - can be a very useful concept. Something "slow" that is 100x off the speed of light may be more than fast enough in some circumstances but a huge problem in others. Something "very fast" that is 1.01x the speed of light is very unlikely to be a big problem in any application. And this is true whether the speed of light for the operation in question is 1ns or 1min.
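To make the framing concrete, here is a minimal Go sketch of that "Nx off the speed of light" measurement. The 50ns floor for a map lookup is an assumed number, purely for illustration, not a measured bound:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed "speed of light" for one map lookup; a made-up floor
        // for illustration, not a measured one.
        const speedOfLight = 50 * time.Nanosecond

        m := map[int]int{42: 1}
        const iters = 1_000_000
        sum := 0
        start := time.Now()
        for i := 0; i < iters; i++ {
            sum += m[42] // the operation being judged against the floor
        }
        perOp := time.Since(start) / iters
        fmt.Printf("measured %v/op, %.1fx off the assumed speed of light (sum=%d)\n",
            perOp, float64(perOp)/float64(speedOfLight), sum)
    }

The point is the ratio in the output, not the absolute number: a 2x ratio and a 200x ratio call for very different reactions, whatever the units.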
> Most people, most of the time, doing most web work, are so thoroughly outclassed on speed by their web framework and server that the speed of their choice is irrelevant. Which means they should be selecting based on all the other relevant features.
I disagree with that, as the choice of framework doesn't impact just the request/response lifecycle but is crucial to the overall efficiency of the system, because it leads the user down a more or less performant path. Frameworks are not just HTTP servers.
Choosing a web framework also marries you to a language, so the ceiling of your application will be tied to how performant that language is. Taking the article's example, as your application grows and more and more code is in the hot path, you can very easily get into a space where requests that took 50ms now take 500ms.
You can, without loss of generality, simply insert "holding the language constant" into the essay if you like; the question of what language to choose is mostly orthogonal to the point I'm making in that particular essay.
In other contexts I'm a huge proponent of validating that your language is fast enough. There's a constant low-level flow of "we switched from X to Y and got Z% speedup" articles on HN, and while the details of the articles are often quite interesting, with rare exceptions I find that I read them and conclude that it was an error to ever have used X in the first place and the team should have been able to tell immediately that it wasn't going to be a good solution. Exceptions include "we were a startup at the time and experienced rather substantial growth" and the rare cases where technology X is just that much faster for some particular reason... though probably not being a "scripting language" as nowadays I'm not particularly convinced they're all that much faster to develop with past the first week, but something more like "X had a high-level but slow library that did 90% of what we needed but when we really, really needed that last 10% we had no choice but to spend person-years more time getting that last 10% of functionality, so we went with Y anyhow for the speed".
You make an assumption here that the software is also constant or static.
> There's a constant low-level flow of "we switched from X to Y and got Z% speedup" articles on HN, and while the details of the articles are often quite interesting, with rare exceptions I find that I read them and conclude that it was an error to ever have used X in the first place and the team should have been able to tell immediately that it wasn't going to be a good solution.
The language X was probably a good solution at first. Then the company started to increase its product surface or acquired enterprise customers. Now you have new workloads that were not considered and the language is no longer suited for it.
Most likely a decision is made to not introduce a second language to the company just for these new workloads as that complicates the architecture, not to mention hiring and managing, so you stay with language X and try to make do. Now you have language X doing more than it is suited for and response times often increase due to that.
This isn't really a case of startup growing pains, just that you cannot know ahead of time every application your software will have. You can choose a "mostly fast for all use cases" language and bet that your applications will fit those general use cases; this means you win small but also lose small.
> The language X was probably a good solution at first.
I would contest even that. Most of the time it's a fight or flight response by the devs, meaning that they just go with whatever they are most comfortable with.
In previous years I made good money switching companies away from Python and Rails to Elixir and Golang. The gains were massive and maintainability also improved a lot.
Of course, this is not advocacy for these technologies in particular. "Use the right tool for the job" is a popular adage for good reasons. But my point is: people don't use the right tool for the job as often as many believe. Mostly it's gut feeling and familiarity.
And btw I am not trashing on overworked CTOs opting for Python because they never had the time to learn better web tech. I get their pain and sympathise with it a lot. But the failing of most startup CTOs that I observed was that they fell into control mania and micro-management as opposed to learning to delegate and trust. Sadly that too is a function of being busy and overworked so... I don't know. I feel for them but I still want better informed tech decisions being made. At my age and experience I am extremely tired and weary of seeing people make all the same mistakes every time.
Between Python and Elixir or Golang, I’ll stick with Python for an exploratory side project.
It’s just a better fit when you’re not quite sure what you’re building. You just gain more on the 99% of projects that never go anywhere than you lose on the one that you end up trying to turn into a real product. So calling them better web tech assumes a lot about the development process that isn’t guaranteed.
You might be proving my point here because Elixir is amazing for exploration. You literally generate a project and then can immediately fiddle with it to your heart's content in a REPL.
As said though, I don't judge people who go by familiarity. But one should keep an open mind, and learning some of the modern languages (like Elixir) is much less work than many believe.
A better web tech in this case refers to having the potential to scale far above what Python can offer, plus a very good developer experience. To me those two are paramount.
You can do much the same in the Python REPL. So I'm really unsure what you mean here? I mean, Elixir's is a little better for displaying data structures, but I never really found that particularly useful.
Okay. Not contesting that. Use what you like. My point here was that one does not have to choose between "easy to use and iterate with" and "future-proof". These days you can have both; Elixir, Golang and Rust seem to offer it. Python in my experience only has the former: performance-wise, it inevitably ends up being a bottleneck at some point.
It's also true that many projects will never hit that point. For those Python is just fine. But in recent years I prefer to cover my bases, and I have not been disappointed by any of the 3 PLs above.
RE: your edit, Elixir's REPL allows modifying the app in-place but I have not worked with Python in a long time and it might have that as well. Can't remember. Also you can temporarily change an app in production which made fixing certain elusive bugs almost trivial, many times. As much as I love Golang and Rust they got nothing on Elixir's ability to fix your app literally in real time. Then when you are confident in the fix, you make the actual code change, merge it and deploy.
Agreed, some languages and frameworks have trouble scaling: https://news.ycombinator.com/item?id=45950542
You can also squeeze performance out of most languages by knowing the bottlenecks and working around them. Even Go you can squeeze performance out of if you really need to and want to.
Go is far from the slowest language even though it has GC.
I like to characterize it as "the slowest language of the fastest class of languages". In general other compiled languages are faster, though generally we're talking 2 or 3 times at the most (Go doesn't optimize very hard but the language defaulting to unboxed values made up for a substantial proportion of that versus more box-heavy compiled languages), but Go is "generally" faster than nearly all non-compiled languages. "Generally" here means "on general code"; JIT languages can definitely outdo Go on heavily numeric code, even in scripting languages, because JITs are very good at that, but the sort of "general" code that isn't obviously numeric will be faster in Go than any non-compiled language.
This sort of complicated analysis doubles as another example of the difficulty of context-free "fast" and "slow" labels. Is Go "fast"? For a general programming language, yes, though not the fastest. If you reserve "fast" for C/C++/Rust, then no it is not fast. Is it fast compared to Python, though? Yes, it'll knock your socks off if you're a Python programmer even with just a single thread, let alone what you can do if you can get some useful parallel processing going.
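If you want to feel the unboxed-values point within Go itself, here is a minimal benchmark sketch (run with "go test -bench ."). The benchmark names are mine; the boxed variant forces the same arithmetic through interface{} and pays for allocations and dynamic type assertions, roughly what box-heavy languages pay by default:

    package boxing_test

    import "testing"

    // Unboxed: plain int arithmetic, the Go default.
    func BenchmarkUnboxed(b *testing.B) {
        sum := 0
        for i := 0; i < b.N; i++ {
            sum += i
        }
        _ = sum
    }

    // Boxed: the same arithmetic forced through interface{}, paying for
    // allocations and type assertions on every iteration.
    func BenchmarkBoxed(b *testing.B) {
        var sum interface{} = 0
        for i := 0; i < b.N; i++ {
            sum = sum.(int) + i
        }
        _ = sum
    }

Not a rigorous cross-language comparison, just a way to see the cost of boxing in isolation.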
If we allow for warmup, Java is pretty fast for many workloads.
https://madnight.github.io/benchmarksgame/go.html
Java is very fast. You just have to account for the amount of absolutely terribly written Java code full of 6 levels of inheritance, 6 levels of nested loops, and AbstractFactoryFactoryFactoryProviderImpls out there to slow it down. I swear I have seen so much Java code where, in the name of "abstraction" and "maintainability", x + y gets turned into 8 classes and method calls.
I think that school of Java is on the way out. Since it's absurdly backward compatible there's a lot of factoryfactory code still lingering, but I don't see it being written a lot.
Though there are a lot of unfortunate "truths" in Java programming that seem to encourage malignant abstraction growth, such as "abstractions are free" and "C2 will optimize that for you". It's technically kinda mostly true, but you write better code if you believe polymorphism is terribly slow, like the C++ guys tend to.
Yeah, I have been disenchanted by this style of programming. It's why I prefer languages like Python, where you use objects/classes as needed.
Modern Java is obviously still objects everywhere, but deep inheritance is really discouraged. Interfaces with default implementations, records, lambdas: there is just a lot that has moved the culture away from that style of programming, though not all places have moved.
@jerf beat me to it but indeed Golang is one of the very slowest compiled languages out there. I have an okay idea why but I wish someone there made a more serious effort to accelerate it at least by a factor of 2x.
I hate having to mull over the pros and cons of Rust for the 89th time when I know that if I make a service in Golang I'll be called in 3 months to optimise it. But multiple times now I have swallowed Rust's higher complexity and slower initial ramp-up just so I don't have to go debug the mess that a few juniors left while trying to be clever in a Golang codebase.
> A software engineer may be slicing and dicing nanoseconds
People typically live only once, so I want to make the best use out of my time. Thus I would prefer to write (prototype) in ruby or python, before considering moving to a faster language (but often it is not worth it; at home, if a java executable takes 0.2 seconds to delete 1000 files and the ruby script takes 2.3 seconds, I really don't care, even more so as I may be multitasking and having tons of tabs open in KDE konsole anyway, but for a company doing business, speed may matter much more).
It is a great skill to be able to maximize for speed. Ideally I'd love to have that in the same language. I haven't found one that really manages to bridge the "scripting" world with the compiled world. Every time someone tries it, the language is just awful in its design. I am beginning to think it is just not possible.
In my experience, Ruby starts fast and does everything fast. But you can make a case against Ruby if you want, by making it do a lot of CPU work for a long time. Java may take some time to warm up and then it will destroy Ruby.
But why not simply write the code that needs to be fast in C and then call it from Ruby?
> But why not simply write the code that needs to be fast in C and then call it from Ruby?
Because that's often a can of worms, and because people are not as good with C as they think they are, as evidenced by plenty of CVEs and the famous example of quadratic performance degradation when parsing a JSON file at GTA V startup -- something a fan had to decompile and fix themselves.
For scripting these days I tend to author small Golang programs if bash gets unwieldy (which it quickly does; get to 100+ lines of script and you start hitting plenty of annoyances). Seems to work really well, plus Golang has community libraries that emulate various UNIX tools and I found them quite adequate.
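For flavor, this is the kind of ten-line Go "script" I mean; a sketch that lists files over an arbitrary 1 MiB threshold, the sort of job that starts simple in bash and grows awkward:

    package main

    import (
        "fmt"
        "io/fs"
        "os"
        "path/filepath"
    )

    func main() {
        root := "."
        if len(os.Args) > 1 {
            root = os.Args[1] // optional directory argument
        }
        err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            info, err := d.Info()
            if err != nil {
                return err
            }
            if info.Size() > 1<<20 { // files larger than 1 MiB
                fmt.Printf("%10d  %s\n", info.Size(), path)
            }
            return nil
        })
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
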
But back to the previous comments, IMO both bash and Ruby are quite fine for basic scripting... if you don't care about startup time. I do care in some of my workflows, hence I made scripts that pipe various Golang / Rust programs to empower my flow. But again, for many tasks this is not needed.
> But why not simply write the code that needs to be fast in C and then call it from Ruby?
From the things that have been coming out since YJIT has been in development, and what the core team members have been showing, that's not necessary. Methods written in pure Ruby outperform C libraries called from Ruby due to a variety of factors.
In the unix world, the usual philosophy is to start with a script (bash, awk, perl, …), then move to C and the like when the use case is understood enough. So it's more like a switch from programs and IPC to libraries and function calls.
But there is stuff where you immediately know you want a program, and it's likely to be related to things like pure algorithms, protocols, and binary file formats.
>number of orders of magnitude, but are doing engineering across the entire range.
This in a way highlights the knowledge gap that exists in American manufacturing. Physical parts are designed to work in terms of cycles, which can span both milliseconds and decades. Engines in particular need to work in terms of milliseconds and decades, but there are other vehicle parts, such as airbags, pumps, and steering and suspension components, that need to be designed across similarly large ranges of magnitude.
Often, a user presented with a progress bar will wait much longer without frustration than a user without one will. Sometimes, making the code faster is non-trivial and not cost-effective compared to simply making the user not complain.
Latency is weird in UX. Being too fast can also be jarring.
If you hit a button that's supposed to do something (e.g. "send email" or "commit changes") and the page loads too fast, say in 20ms, a lot of users panic because they think something is broken.
Very true. And it sort of indicates that it is broken or at least unusual. If sending an email at least means "my email server has it now", then 20ms for that to happen would be a very unusual setup.
So if the dialog closes in 20ms, it likely means the message was queued internally by the email client, and then I would be worried that the queue will not be processed for whatever reason.
Yeah it's usually a problem with asynchronous UIs. You basically need to simulate a synchronous UI to make the interface seem reliable.
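One common way to fake that, sketched below with a made-up 300ms floor (the helper name is mine, not a standard API): pad operations that finish suspiciously fast up to a minimum perceived duration.

    package main

    import (
        "fmt"
        "time"
    )

    // withMinDuration runs op and, if it finishes too quickly, pads the
    // elapsed time so the UI never appears suspiciously instant.
    func withMinDuration(min time.Duration, op func() error) error {
        start := time.Now()
        err := op()
        if remaining := min - time.Since(start); remaining > 0 {
            time.Sleep(remaining)
        }
        return err
    }

    func main() {
        err := withMinDuration(300*time.Millisecond, func() error {
            // e.g. enqueue the email; often done in under a millisecond
            return nil
        })
        fmt.Println("done:", err)
    }
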
The file copy dialog in modern Windows versions also has (had) this weird disconnect between the progress it's reporting and what it's actually doing. It seems very clear that one thread is copying and one is updating the UI, and the communication between the two seems oddly delayed and inaccurate.
The progress reporting is very bizarre and sometimes the copying doesn't seem to start immediately. It feels markedly flakey.
Similar to the fact that humans need enlarged, human-scale input and output mechanisms (keyboard, mouse, smartphone, control panel buttons in a cockpit). The actual meat of the computation can be packaged in a nicely small form factor.
Is there a term for this kind of psychologically-targeted UX design?
For example, having a faster-spinning progress wheel makes users feel like the task is completed faster even if the elapsed time is the same.
On the subject of human comprehension of ten orders of magnitude:
Pretty often you have a hot path that looks like: a matmul routine that does X FMAs, a physics step that takes Y matmuls, a simulation that takes Z physics steps, an optimizer that does K simulations. As a result, estimating performance across 10 orders of magnitude is just adding the logs of 4 numbers, which works out as "count up the digits in X, Y, Z, K; don't get to 10", which is perfectly manageable to intuit.
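A toy version of that arithmetic, with all four magnitudes made up:

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        // Hypothetical magnitudes: FMAs per matmul, matmuls per physics step,
        // physics steps per simulation, simulations per optimizer run.
        x, y, z, k := 1e4, 1e2, 1e3, 1e2
        total := x * y * z * k
        // log10(total) is just the sum of the exponents (4+2+3+2 = 11),
        // so this example blows the "don't get to 10" budget.
        fmt.Printf("total FMAs ≈ 10^%.0f\n", math.Log10(total))
    }
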
I think people tend to overestimate how much certain choices matter for performance. But I don't agree that the speed of frameworks doesn't matter in most cases. To me the base performance of such a framework is essentially like a leaky abstraction. The moment I hit any bottleneck I suddenly need to understand how the framework works internally to work around it.
I'm unlikely to get bottlenecked on well written and idiomatic code in a slower framework. But I'm much more likely to accidentally do something very inefficient in such a framework and then hit a bottleneck.
I also think the difference in ergonomics and abstraction are not that huge between "slow" and "fast" frameworks. I don't think ASP.NET Core for example is significantly less productive than the web frameworks in dynamic languages if you know it.
I think the problem with optimization as an approach to getting a program to be fast is that it's fairly often not bottlenecks that are the problem, but architectural choices that don't show up in the profiler's flame charts.
Even if you find a slow function that constitutes 20% of the runtime and optimize the living hell out of it to cut its execution time by 20%, guess what: your program is now only about 4.1% faster (total runtime drops to 1 - 0.2 × 0.2 = 0.96 of the original, a speedup of 1/0.96 ≈ 1.0417).
Applies to databases too. You'll find a lot of claims that Postgres is "fast" without defining what exactly that means. In certain industries 1M+ row inserts per second for a single indexed table are minimum specs, which is out of reach for something like Postgres.
I mean, sure. But it is totally worth knowing what you have to wait for, versus what you can do right away.
I honestly don't know if async makes this easier or harder. It makes it easier to write sections of code that may have to wait for several things. It seems to make it less likely to write code that will kick off several things that can then be acted on independently when they arrive.
More useful words are "negligible" and "problematic".
> The question I’m talking about here is, is “slow”, as a word standing by itself with no further characterization, even an applicable concept?
Yes, because there's usually context. To use his cgo example, cgo is slow compared to C->C and Go->Go function calls.
More important, IMO, are the non-local effects of using CGo (vs. e.g. Go asm), which are harder to understand.
But what matters here is probably whether it's slow compared to the code being invoked via cgo, which is only true for pretty fine-grained APIs.
I mean, probably what really matters is how slow things are compared to the network latency of an HTTP request.
In web-development arguing about Go-Go vs CGo-Go times is probably inconsequential.
Latency and throughput are not interchangeable, and it is entirely normal to do millions of subroutine calls per HTTP request, so even if latency is your only concern, a subroutine call operation with a latency ten thousand dollars faster than the HTTP request might still be too slow.
Latency is not interchangeable with throughput because, if your single 8-core server needs to serve 200 HTTP requests per second, you need to spend less than 40 core-milliseconds per request on average (8 cores give you 8,000 core-milliseconds of compute per wall-clock second, and 8,000 / 200 = 40), no matter whether the HTTP clients are 1ms away or 1000ms away.
"ten thousand dollars faster"?
Yes, at the abstraction level of money, many more things become interchangeable, but in order to have something worth spending money on, we have to distinguish between those things.
The problem with that context is that it is neither good, nor useful.
For good, an example that perhaps I should lift into the essay itself is probably more useful than an explanation. A year or two back, there was some article about the details of CGo or something like that. In the comments there was someone being quite a jerk about how much faster and better Python's C integration was. This person made several comments and was doing the whole "reply to literally everyone who disagrees" routine, insulting the Go designers, etc., until finally someone put together the obvious microbenchmark and lo, Go was something like 25% faster than Python. Not blazingly faster, but being faster at all rather wrecked the thesis. Nor would it particularly matter that "this was a microbenchmark and those don't prove anything": the belief was clearly that CGo was something like an order of magnitude slower, if not more, so even a single microbenchmark where Go won was enough to settle the point.
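This is not the benchmark from that thread, just a sketch of the obvious shape such a microbenchmark takes: crude wall-clock timing of a trivial cgo call against an equivalent pure-Go call (the add_one function is a stand-in; needs cgo enabled and a C toolchain):

    package main

    /*
    int add_one(int x) { return x + 1; }
    */
    import "C"

    import (
        "fmt"
        "time"
    )

    //go:noinline
    func addOne(x int) int { return x + 1 }

    func main() {
        const iters = 1_000_000

        start := time.Now()
        for i := 0; i < iters; i++ {
            _ = C.add_one(C.int(i)) // crosses the cgo boundary every call
        }
        cgoPer := time.Since(start) / iters

        start = time.Now()
        for i := 0; i < iters; i++ {
            _ = addOne(i) // stays within the Go runtime
        }
        goPer := time.Since(start) / iters

        fmt.Printf("cgo: %v/call, pure Go: %v/call\n", cgoPer, goPer)
    }
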
While the being-a-jerk bit was uncalled for, I don't blame the poster for the original belief. Go programmers refer to CGo as "slow". Python programmers refer to their C integration as "fast". It is a plainly obvious conclusion from such characterizations that the Python integration is faster than Go's.
Only someone being far, far more careful with their uses of "fast" and "slow" than I am used to seeing in programming discussions would pick up on the mismatch in contexts there. As such, I don't think that's a particularly good context. People who use it do not seem to have a generally unified scale of "fast" and "slow" that is even internally consistent, but rather a mishmash of relatively inconsistent time scales (and that's not "relatively inconsistent" as in "sort of inconsistent" but "inconsistent relative to each other" [1]), thus making "fast" and "slow" observably useless to compare between any of them.
For useful, I would submit to you that unless you are one of the rare exceptions that we read about with those occasional posts where someone digs down to the very guts of Windows to issue Microsoft a super precise bug report about how it is handling semaphores or something, no user has ever come up to you and said that your software is fast or slow because it uses CGo, or any equivalent statement in any other language. That's not an acceptance criterion of any program at a user level. It doesn't matter if "CGo is slow" if your program uses it twice. The default context you are alluding to is a very low level engineering consideration at most but not something that is on its own fast or slow.
A good definition of fast or slow comes from somewhere else, and maybe after a series of other engineering decisions may work its way down to the speed of CGo in that specific context. 99%+ of the time, the performance of the code will not get worked down to that level. We are blinded by the exceptions when it happens but the vast majority of the time it does not.
By this, I mean it is an engineering mistake, albeit a very common one, to obsess in this "default context" about whether this technology or that technology is fast or slow. Programmers do this all the time. It is a serious, potentially project-crashing error. You need to start in the contexts that matter and work your way down as needed, only rarely reaching this level of detail at all. As such, this "default context" should be discarded out of your mind; it really only causes you trouble and failure as you ignore the contexts that really matter.
[1]: Of the various changes to English as she is spoken over the last couple of centuries, one of my least favorite is how a wide variety of useful words that used to have distinct meanings are now just intensifiers.