We've been impacted by this. I migrated our services to Python 3.14 so we could attach profilers during runtime.
A couple of services looked like they had a memory leak. Memory was continuously increasing over time. Thanks to Python 3.14, we were able to use memray to understand what was going on. Those services were recreating HTTP clients (aiohttp) for every inbound request, and memory allocated by the downstream SSL lib was growing faster than it was being released.
We ended up rolling back to 3.13, which fixed the issue. I'll try again with 3.14.5.
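The usual fix for the per-request-client pattern is to create the expensive pieces once and reuse them. Here is a stdlib-only sketch of the idea, using a cached `ssl.SSLContext` as a stand-in for the per-request client state (the function name is hypothetical; an aiohttp `ClientSession` would be cached the same way):

```python
import ssl
from functools import lru_cache

@lru_cache(maxsize=None)
def default_ssl_context() -> ssl.SSLContext:
    # ssl.create_default_context() loads CA certificates and allocates a
    # sizeable native structure; building one per inbound request is how
    # SSL memory can grow faster than it is released.
    return ssl.create_default_context()

# Every caller now shares one context instead of allocating a new one.
assert default_ssl_context() is default_ssl_context()
```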
If you are using "httpx", it's likely caused by a reference cycle. I made a PR to fix it but the maintainers haven't applied it. :-( https://github.com/encode/httpx/pull/3733
The reference cycle httpx creates is kind of a worst-case scenario for the incremental GC issue. Both the generational (3.13 and older) and the incremental GC are triggered by the net new "container" objects (objects that have references to others, like lists and not like ints and floats). The short summary is that you need to create more container objects before the incremental GC triggers. In the case of the httpx reference cycle, you have a relatively small number of container objects hanging on to a lot of memory, due to the SSL context data (which is a big memory hog).
Reverting back to the generational GC was the wise thing to do, even though it's a bit scary to do in a bugfix release. The incremental GC works for most people, but in the minority of cases where it doesn't, it uses quite a lot more memory. I'm pretty sure that with some additional tuning the incremental GC would be fine too; it just didn't get that tuning. The generational GC has literal decades of real-world use (Guido merged my patch in June 2000, and Tim Peters did a bunch of tuning after that to optimize it).
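The container-count trigger described above can be observed from the interpreter. A sketch (class name hypothetical) of two small container objects in a cycle pinning a large buffer until a collection actually runs; `gc.get_threshold()` confirms the trigger is an allocation count, not a byte count:

```python
import gc

class Node:
    def __init__(self, payload):
        self.payload = payload
        self.peer = None

def make_cycle():
    # Two tiny container objects pin ~8 MB: their refcounts never reach
    # zero, so only a cyclic-GC pass can free them.
    a, b = Node(bytearray(8 * 2**20)), Node(None)
    a.peer, b.peer = b, a

gc.disable()               # simulate "the GC hasn't triggered yet"
make_cycle()
print(gc.get_threshold())  # thresholds are object counts, not bytes
freed = gc.collect()       # an explicit pass does find the cycle
assert freed >= 2          # at least the two Nodes were unreachable
gc.enable()
```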
Unfortunately, you may be the wrong gender to contribute to Encode repositories like httpx:
> I've closed off access to issues and discussions.
> I don't want to continue allowing an online environment with such an absurdly skewed gender representation. I find it intensely unwelcoming, and it's not reflective of the type of working environments I value.

— https://github.com/encode/httpx/discussions/3784

Discussed on Hacker News here: https://news.ycombinator.com/item?id=47193563

A fork discussed here: https://news.ycombinator.com/item?id=47514603
We've been chasing down similar aiohttp client creation issues (linked to ...aiobotocore usage) for months now.
It's annoying that somehow talking to S3 etc requires so much churn. We have been trying to cache session objects and the like but clearly are still missing something.
Chasing this down has also made me realize how little Python libs use `weakref`, and how they just build up so many circular references. The other day I figured out that Django's request session infrastructure creates a circular reference, meaning that requests have to get GC'd to be cleaned up in CPython.
I have a suspicion that the 3.14 problems are heavily linked to "real" workloads being almost entirely filled with cyclical objects.
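For cases like the Django one described above, a weak back-reference is usually enough to break the cycle, so plain reference counting can clean up without waiting for a GC pass. A sketch with hypothetical `Request`/`Session` names:

```python
import gc
import weakref

class Session:
    def __init__(self, request):
        # Weak back-reference: the Session can reach its Request while it
        # lives, but does not keep it alive, so there is no cycle.
        self._request = weakref.ref(request)

class Request:
    def __init__(self):
        self.session = Session(self)

gc.disable()             # prove refcounting alone suffices
req = Request()
probe = weakref.ref(req)
del req
assert probe() is None   # freed immediately, no cyclic-GC pass needed
gc.enable()
```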
It's really fascinating to read this, since I've encountered similar memory issues in other languages (ruby, go, etc.). Debugging these issues is a pain.
Is there a way to make all this much easier to debug and to prevent memory issues in the first place? Is the abstraction level not quite right?
On profilers - profiling will come in 3.15, are you referring to remote exec? It is a great feature I am very excited about; at the same time I'm afraid the company won't allow ptrace capability in prod.
yes. remote exec allows me to attach profilers (e.g. memray) directly into a running process. i'm also excited about the upcoming statistical (cpu) profiler from 3.15
"Python 3.14 shipped with a new incremental garbage collector. However, we’ve had a number of reports of significant memory pressure in production environments.
We’ve decided to revert it in both 3.14 and 3.15, and go back to the generational GC from 3.13."

Sounds like the right move to me.
The main benefit of python to me is that while slow, it's predictable. I do think they're going to get a lot more resistance to adding JITs, moving GCs, etc. it will become java with a million knobs to tune. If people want a JIT'd python just use pypy, right?
Java lost almost all those knobs a while ago (I mean they're there, but you're better off relying on the defaults). The modern GCs have one or at most two knobs remaining, and even that will become unnecessary next year. As to predictability, you get maximal pause times of well under 1ms for heaps up to 16TB.
Lately, they seems to work with CRIU, various heuristics, multi-stage in-process bytecode compilation ..

Java is a mess, they are working hard to avoid fixing their issue (that nobody else have, so fixes are available)

> Lately, they seems to work with CRIU, various heuristics, multi-stage in-process bytecode compilation ..
Not sure what you mean by this, as this has nothing to do with GC, and Java has had a multi-tier optimising compiler for 15 years now.
> that nobody else have, so fixes are available
Go has much worse problems with GC than Java does these days, and nobody else is able to achieve similar performance in large programs with heavy workloads. So everyone else lives with less sophisticated compilers and memory management simply by accepting worse performance.
Compared to Python's, all of them are beyond perfect. And 99.9% of the time you don't even need to use anything but the default.

> Compared to Python's, all of them are beyond perfect.
I somehow understand the situation less after reading this.
Is Python's GC bad, or are there cyclic reference issues? Is it possible to detect cyclic references perfectly? What does beyond perfect mean? If we have 7 and 0.1% of the time you need one of the 6 that is non-default, how do we choose? Is the understated version of "Compared to Python's, all of them are beyond perfect" "I think Java's are great"? If not, what about Python's impl makes it so lackluster to any of 7 of Java's?
Yes. The GCs in Java, .NET, V8, and Go do it.

> If we have 7 and 0.1% of the time you need one of the 6 that is non-default, how do we choose?

Java's GCs are optimised for different workloads and environments, and when the choice matters, they're easy to choose among:

1. Parallel GC: Maximal throughput when latency doesn't matter (batch processing).

2. Serial GC: Very small machines.

3. ZGC: low latency (<<1ms maximal pause, i.e. effectively pauseless)
4. G1 (the current default): A balanced mix of throughput and latency.
These are all the standard GCs (the seven you mentioned include a GC similar to Go's that was removed years ago, a "no op" GC for benchmarking hidden behind a development flag, and alternative implementations by different companies of some of the ones above).
It's possible that either Serial or Parallel will be removed when G1 is able to fully replace them.
Now, why do users need options? Because Java runs most of the world's finance, manufacturing, shipping and logistics, telecommunication, travel, healthcare, retail, defence, and government. We're talking large, complex software that handles huge workloads, and the needs vary. What works well enough for a CLI dev tool or a simple website is often not good enough to handle the world's credit card transaction processing or mobile phone networks.
> If not, what about Python's impl makes it so lackluster to any of 7 of Java's?
Java's GCs are moving collectors, which offer advantages not just compared to Python's GC but to all memory management strategies. Memory management (even in C) imposes a CPU/RAM tradeoff. Moving collectors (used in Java, .NET, and V8) give you a knob for controlling the tradeoff, i.e. they're able to convert RAM to CPU (i.e. use RAM chips as a hardware accelerator) and vice-versa.
> Is Python's GC bad, or are there cyclic reference issues?
Unless you're being pedantic and including reference counting without cycle detection as GC, if your GC has cyclic reference issues, your GC is bad.
> Is it possible to detect cyclic references perfectly?
Yes? That's the entire point of tracing GC. You have some set of root objects that you start with (globals, objects on thread stacks, etc.) and then you mark every object that's reachable from them. Anything that's not reachable is garbage, even if there are cycles within them.
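A toy version of that mark phase, over an object graph written as an adjacency map (names are illustrative): everything reachable from the roots is live; whatever remains is garbage, cycles and all.

```python
def reachable(roots, refs):
    # Standard mark phase: walk the reference graph from the roots.
    seen, stack = set(), list(roots)
    while stack:
        obj = stack.pop()
        if obj not in seen:
            seen.add(obj)
            stack.extend(refs.get(obj, ()))
    return seen

refs = {
    "frame": ["a"],
    "a": ["b"],
    "b": ["a"],   # cycle a <-> b, still reachable from the root
    "c": ["d"],
    "d": ["c"],   # cycle c <-> d, unreachable: garbage
}
live = reachable(["frame"], refs)
assert set(refs) - live == {"c", "d"}
```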
>Is Python's GC bad, or are there cyclic reference issues?
Both can be true. The first can even be wholly or partly due to the second.
In addition, the way Python does it via reference counting causes fragmentation, poor cache locality, and general slowness for mass allocations. And it's one-size-fits-all.

Java has a much larger selection to pick from to fine-tune specific use cases, with each collector being far better for its use case. And the default no-need-to-think one (G1, iirc) is already faster and better than Python's.
Are you not confusing GC (freeing memory) with the memory allocator?
Memory allocators (tcmalloc, jemalloc) are concerned with fetching (and releasing) pages of memory from the OS and allocating objects for the program
GC is only responsible for saying to the memory allocator "this object is no longer used"

(please stay focused on java)
>Are you not confusing GC (freeing memory) with the memory allocator?
No, you're missing the fact that the allocation of memory and the GC go hand in hand, because you need it so for optimizations. They are designed together to cooperate in modern runtimes.
PyPy is not looking healthy right now - it's several versions behind in support and, while it's not dead, it looks like it might be settling down for a rest.
Obviously it's not easy to move the whole language of a big codebase, but I feel a lot of this stuff (fiddling with GC, JITing, type hints, and I'm dubious about the free-threading stuff) tries to take Python somewhere it isn't really good at, and if that's what you want, you really want a different language.
Libraries. I use both languages, and a survey of what libraries are available is part of picking an implementation language when starting a greenfield project.
A do-nothing C program:

$ time ./a.out
real 0m0.002s
user 0m0.000s
sys 0m0.002s
A do-nothing Go program:
$ time ./tmp
real 0m0.002s
user 0m0.000s
sys 0m0.003s
I don't believe Go has any optimizations to not start its runtime if it isn't necessary, but when I added spawning a goroutine that immediately blocks on a channel read that will never come the numbers didn't change. That doesn't really time the runtime. Probably the program terminated before the goroutine was scheduled to run anything. It just makes it so there definitely wasn't an early exit because the compiler or the runtime "realized" it didn't need to start the runtime.
I'm sure the Go program is somewhat slower to start and end than C, and that we're running into the limits of how quickly processes can be spawned and other timing overhead which is obscuring the difference. However for practical purposes, "it starts up in less than the overhead for starting a process in the shell" is the same speed for most purposes.
Not even a "do nothing" Python program, no Python program at all:
$ time python3 -c 1
real 0m0.012s
user 0m0.008s
sys 0m0.004s
If you had a Go program that was slow to start up, it was your program, not Go. By contrast, Python, and the dynamic scripting languages in general, can be quite slow to start up, just in the reading and compiling of the code. (Even .pyc files, IIRC, take processing, just less processing than Python source code... it's still nowhere near "memory map it in and go" as it is for statically-compiled languages.)
What? Compared to Python they're like lightning. Typically milliseconds to the start of main() - admittedly they can be slowed down by init() nonsense and terrible generated protobuf code nonsense in deep dependency trees - but with a non-trivial Python program you can look forward to an order of magnitude more. There are techniques to help address that but (1) they're not idiomatic and (2) it still only mitigates it.
I suppose Go programs are slower than the equivalent thing in C or C++, but I'm not sure that's a very relevant comparison in most cases today (how many new things being written would choose those languages).
Well, they never made the jump to Python 3. But shipping 2.7 interpreters in 2024 was quite an achievement on its own. So their users already know this pain. And from my experience in academia, python 2.7 and java 8 will probably be used for another 20 years before the last machine running that stuff burns out.
Why are people still building systems on top of a language that continually undergoes fundamental changes nearly 40 years after release? Is this not the strongest indication that this language is not well designed, it is unstable, and encounters many issues that flat out don't exist in other high level languages?
What language that is actually used 40 years after release isn't undergoing big, fundamental changes?
Java? Nope, you're getting a fundamental change in Valhalla
C++? Nope, new language edition every few years with fundamental changes
C? C23 has a number of fairly fundamental changes, expect more in the next language revision
I think your sense of causality is backwards here. These languages are getting fundamental changes because they're being widely used. That is what motivates and drives the change. Languages with no users don't need to change.
I'm currently in a .NET shop so this isn't an issue for me, but it makes me wonder if Python will eventually adopt the concept of LTS releases. This could have been avoided as an issue if it had been part of a non-LTS release.
If all releases are LTS, then none are. Part of the point GP was making is that when some releases have a very short maintenance window, then changes that are terrible in them don't need to be reverted (since the maintenance window will close soon anyways).
It also serves as a soft signal that "hey this new feature, it might break things in unexpected ways even though it looks like it works!" so don't update until the next LTS which should encompass features that have been greenlit as stable, and have sat through a few releases.
Yeah it seems like a miss. I guess the thinking was that it wasn't developer-facing and just an internal optimization. But of course any change to garbage collection will change the memory and cpu dynamics of the process in a material way.
.NET seems to have regularly changed the garbage collector over the years and I do not remember any similar surprises in production. I wonder why they have had better experience?
I thought that by now dynamic garbage collection was a known quantity, so that making changes, outside of outright bugs, is fairly safe and predictable?
One thing Microsoft does really well is eating its own dogfood and Microsoft feeds a ton of .Net dogs.
So any change to GC starts with massive .Net MSFT code base so they get extremely good telemetry back about any downsides and might be able to fix it in time.
There is almost no dogfooding on Windows development since version 8; the TypeScript team would rather rewrite the compiler in Go, and Azure has plenty of Go, Rust and Java projects alongside .NET.
Windows Development is not "We are not dogfooding", it's that incentives are misaligned with customer wants.

.Net team incentives are aligned with customer wants: provide a language that is highly performant and easy enough to write.

Oh, they really don't dogfood Windows development any longer, regardless of the incentives.
I have my WinRT 8, UAP 8.1, UWP 10, Project Reunion, .NET Native, C++/CX, C++/WinRT, XAML Islands, XAML Direct, WinUI 2.0, WinUi 3.0, WinAppSDK and what not scars to prove how they aren't dog fooding any piece of it in any meaningful manner.
Heck they keep talking about C++ support in WinUI 3, as if the team hasn't left the project and is now playing with Rust instead.
They managed to turn plenty of early WinRT advocates into their hardest critics, who no longer believe anything else they put out, like now this Windows K2 project.
I like my programming language flame wars just as much as the next guy but Go is a really easy language to get started with, while also being very fast. It's not just luck
> The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.

-- Rob Pike
Basically complete disregard for the history of programming languages and learnt lessons.
Go fits well close to Oberon released in 1987, or Limbo in 1995, when exceptions and generics were still esoteric features.
Instead they had to reach out to Phil Wadler to help them, as he did previously with Java almost a decade earlier. panic/recover is a clunky way to do exceptions; instead of doing enumerations like Pascal in 1976, it needs an iota/const code pattern; hardcoded URLs for source repos; if err all over the place like last-century programming; many errors are plain strings; ah, and nil interfaces, what a great gotcha.
What? If you are talking web development, .Net is just about the same as Go. It's 100% Java-style OOP writing, but the result is the same: a very performant API server.
Sure, Rust is completely different beast with different target system.
Actually there’s a change to dotnet 9 with how it handles the heap and GC which caused major issues for us.
I’ll confess the reason it hit us so hard is because the code quality was so low and wasteful on allocations that it didn’t hide the problem as well as previous versions.
I remember working on the Windows Update back-end at Microsoft around 2005, and we had a problem where it would freeze up periodically, and not surprisingly that turned out to be caused by GC. But we noticed it before shipping, and we just tweaked some GC parameters.
So I think it was not a big problem for .Net because it gave you enough control over GC, and because people tested their code before putting it in production.
All these issues were known in previous attempts for removing the GIL. But if Instagram/Meta want it, everyone stands to attention and finds out the obvious problems years later. Kind of like in geopolitics.
I hope Meta switches Instagram to PHP/Hack so they leave Python alone.
In the world of AI written code, Python just doesn’t make sense. Converted about 100k lines in the last few months to golang and the performance is life changing. Curious if we will see global Python adoption fall by 75% or more in the next few years.
With a similar amount of experience with both languages I found Go much easier to read. I've always been a bit miffed why Python is seen as easy to read for experienced developers. I get the syntax is good for short code or people with little experience but my experience is those readability benefits went away quickly with time or complexity.
Why are you miffed about it? I legitimately hate reading golang with a passion and find python to be pretty intuitive, outside of the odd ambitious list comprehension. I worked in a golang shop for several years, so it's not just a familiarity situation either.
We are just different. That's not something to be mad about.
In my opinion most interpreted languages today tend to produce very dense code. Fancy call chains and closures interleaving. If you look for a subtle bug those are hard to reason about, you have to know the details of a lot of different APIs.
Go is verbose partly for that reason, but a silly loop is a silly loop. The constraints are clear, you only have to do the logic.
Python is a garbage language. Dynamic types are a disaster for maintaining large codebases and we waste enormous amounts of compute running large systems with it.
No we should write one of the many modern programming languages that handle certain projects way better, including kotlin, go, or Java. The only things python is best in class at are scripting and as a harness for high performance c++ or fortran.
Any language that uses error codes instead of exceptions is a non-starter for me. Produces code that craps all over the happy path.
Python has a different problem: it is slow as f---. I did a micro benchmark comparison against 5 other languages in preparation for my python replacement language. Outside of dictionary lookups, it is 50-600 times slower than C depending on the workload.
Go, Rust etc are fine. They land at 1.25-3x slower than C. But I prefer the readability of python minus its dynamic nature.
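The 50-600x figures above are the commenter's own measurements. For anyone wanting to reproduce that kind of comparison, a minimal `timeit` harness (the workload and iteration counts are arbitrary choices) looks something like:

```python
import timeit

def summation(n=100_000):
    # A deliberately interpreter-bound workload: per-iteration bytecode
    # dispatch and boxed-int arithmetic dominate the runtime.
    total = 0
    for i in range(n):
        total += i
    return total

secs = timeit.timeit(summation, number=50)
print(f"{1000 * secs / 50:.3f} ms per call")  # machine-dependent
assert summation() == 100_000 * 99_999 // 2   # sanity-check the workload
```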
Nothing about the performance characteristics of python changed with AI, so why would you use python over golang if performance is a requirement/bottleneck? Trying to understand the reasoning, as to me golang and python are equally simple to write and understand.
Regardless of whether golang and python are actually equally simple, python certainly has the reputation of being easier to write and read than almost any other language. That is a big part of its popularity.
Python is not really simple though, the semantics are actually quite bonkers. It just has "simple"-looking syntax, but that only helps you for trivial programs where the bonkers semantics does not get in the way.
I think we'll eventually be generating machine code directly. But until then we should be using code that our team can actually read and understand. If you know Go, then that works for you. Not everyone does.
Doubt it. LLMs will always be more expensive per-token than compilers, and high level languages need fewer tokens than machine code. Also, type systems, warnings, overlap with natural language in names - those are very useful.
For personal projects, yes. For code going into production, you still need human code review, and that has to happen in a language that the humans you've hired are comfortable with. One day, we'll all be YOLOing vibe code straight into production, but that day is not today.
PyPy doesn't have the support it needs and is stuck on 3.11.
Not to mention that there are differences in ecosystem, familiarity, and ergonomics that may make a team want to stick with Python.
“Just use Go” is not really actionable advice in most cases.
People parrot to use OpenJDK without understanding it is mostly Oracle employees working on it.
And if you dislike Oracle, the other minor contributors are Red-Hat, IBM, SAP, Microsoft, Alibaba, Azul,... which for many HNers are the same.
Jython went EOL with Python 2 going EOL.
It's predictable vs Rust, C#, F#, Elixir, Go, etc.?
https://en.wikipedia.org/wiki/Guido_van_Rossum
https://devguide.python.org/versions/
Go is, essentially, nearly perfect at what it does - even if the language itself leaves much to be desired and would ideally be much safer.
Microsoft should up their game. They have a few research languages in development.
They've always been great with languages. Hopefully, they rise to the occasion.
Now we're stuck with it in anything CNCF related.
https://github.com/python/cpython/pull/117120
Free-threading actually uses its own, separate GC: https://labs.quansight.org/blog/free-threaded-gc-3-14
You are free to switch language but you still need to understand it.
Python has a gradual type system.
> (Mocking) Yes, that's why we should go back to Y with even worse static analysis.
Sure
Also, even if it looks like that to you, there are still people that write code with their own hands.