> Are our tools just worse now? Was early 2000s PHP actually good?
Not sure how rhetorical that was, but of course? PHP is a super-efficient language that is tailor-made for writing dynamic web sites, unlike Go. The author mentions a couple of the features that made the original version easier to write and easier to maintain; they exist precisely for this use case, like $_GET.
And if something like a template engine is needed, like it will be if the project is a little bit bigger, then PHP supports that just fine.
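To make that concrete, here is a minimal sketch of the style in question - every name here is hypothetical, and the ?? operator needs PHP 7+ (early-2000s code would have used isset()):

    <?php
    // view.php - hypothetical: the URL /view.php?id=abc123 maps straight to
    // this file on disk, and $_GET arrives already parsed.
    $id = $_GET['id'] ?? '';
    if (!preg_match('/^[A-Za-z0-9]+$/', $id)) {
        http_response_code(404);   // reject anything that isn't a plain id
        exit('not found');
    }
    ?>
    <html>
      <body>
        <h1>Image <?= htmlspecialchars($id) ?></h1>
        <img src="/images/<?= htmlspecialchars($id) ?>.png">
      </body>
    </html>

The "template engine" is just PHP dropping in and out of HTML.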
> Max didn't need a request router, he just put his PHP file at the right place on the disk.
The tendency to abstract code away leads to complexity, while a really useful abstraction minimizes complexity. Here, the placement of PHP files makes things easier -> it's a good abstraction.
And that's why the original code is so much better.
> Max didn't need a request router, he just put his PHP file at the right place on the disk.
This also elides a bit of complexity; if I assume I already have the Nginx and gunicorn process then my Python web server isn’t much worse. (Back in the day, LAMP stack used Apache.)
I’ll for sure grant the templating and web serving language features though.
PHP is also JITed nowadays.
Currently I believe the main advantage is that Hack is async: you can fire multiple SQL/HTTP requests in parallel and cut some wall time.
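Plain PHP can approximate that with curl_multi; a rough sketch, with hypothetical URLs:

    // Fire two HTTP requests concurrently and collect both bodies.
    $urls = ['https://api.example.com/a', 'https://api.example.com/b'];
    $mh = curl_multi_init();
    $handles = [];
    foreach ($urls as $u) {
        $ch = curl_init($u);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // capture body instead of printing it
        curl_multi_add_handle($mh, $ch);
        $handles[] = $ch;
    }
    do {
        curl_multi_exec($mh, $running); // drive all transfers a step
        curl_multi_select($mh);         // block until any socket has activity
    } while ($running > 0);
    $responses = [];
    foreach ($handles as $ch) {
        $responses[] = curl_multi_getcontent($ch);
        curl_multi_remove_handle($mh, $ch);
    }
    curl_multi_close($mh);

Clunkier than Hack's async/await, but the wall-time win is the same.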
This part really hits home. The first time I got to see a huge enterprise C project, I could not believe how simple the code was. Few to no tricks.
> To be perfectly honest, as a teenager I never thought Max was all that great at programming. I thought his style was overly-simplistic. I thought he just didn't know any better. But 15 years on, I now see that the simplicity that I dismissed as naive was actually what made his code great.
It’s a fun trip down memory lane, but the real story today, the sadder story, is that there is no longer any use for simple little programs like this that scratch an itch.
They’ve all been solved 100x over by founders who’ve been funded on this site. It used to make sense to have a directory or cgi-bin of helpful scripts. Now it only makes sense as a bit of nostalgia.
I miss the days when we had less, could get less done in a day… but felt more ownership over it. Those days are gone.
I would argue those days are coming back. Thanks to LLMs, I have probably 10x more "utility" scripts/programs than I had 2 years ago. Rather than bang my head against the wall for a couple hours to figure out how to (just barely) do something in Python to scratch an itch, I can get a nice, well documented, reusable and versatile tool in seconds. I'm less inclined than ever to go find some library or product that kinda does what I need it to do, and instead create a simple tool of my own that does exactly what I need it to.
Just please if you ever give that tool to someone else to use, understand, maintain, or fix, mention that it was created using an LLM. Maybe ask your LLM to mention itself in a comment near the top of the file.
There is still use for small niche programs. I host my own gif repository, a website for collecting vinyls and my own weather dashboard. I don’t expect anyone else to use these sites so they’re tailored to my user experience and it’s great.
> It’s a fun trip down memory lane, but the real story today, the sadder story, is that there is no longer any use for simple little programs like this that scratch an itch.
> They’ve all been solved 100x over by founders who’ve been funded on this site. It used to make sense to have a directory or cgi-bin of helpful scripts. Now it only makes sense as a bit of nostalgia.
Why does it make more sense to learn the syntax for someone else's helper scripts than to roll my own, if the latter is as easy or easier, and afterwards I know how to solve the problem myself?
Because time is finite and you probably set out to achieve something else which is now on hold. Nothing wrong with distractions but let's not glorify them :).
> Because time is finite and you probably set out to achieve something else which is now on hold. Nothing wrong with distractions but let's not glorify them :).
That's true, but it was also true before. To the extent that solving a problem to learn the details of solving it was ever worthwhile, which I think is and was quite a lot, I'd say it's still true now, even though there are lots of almost-but-not-quite solutions out there. That doesn't mean that you should solve all problems on your own, but I think you also shouldn't always use someone else's solution.
Reading this reminds me of the era we envisioned when I was in college (which was not long ago): individuals and societies building their own independent custom stuff (both hardware and software) with the power of computers in everyone's hands. I am sure that is still happening in small pockets, but most of the 'stuff' we use is built by large mindless corporates over which we have almost no control - and who prioritize profits over the well-being of employees and the community.
I don't know for sure what the problem was (I have my theories) and why we could not get to a place where most people build their own custom products.
This is something that's been on my mind a lot over the past few years. I think things were on that trajectory, but somewhere along the line it got out of whack.
User interfaces became more user-friendly [0], while developer experience - though simpler in many ways - also became more complex, to handle the complex demands of modern software while maintaining a smooth user experience. In isolation both of these things make sense. But taken together, it means that instead of developer and user experience converging into a middle place where tools are a bit easier to learn and interfaces a bit more involved, they've diverged further, to where all the cognitive load is placed on the development side and the user expects an entirely frictionless experience.
Specialization is at the core of our big interconnected society, so it's not a surprising outcome if you look at the past century or two of civilization. But at the same time I think there's something lost when roles become too segregated. In the same way homesteading has its own niche popularity, I believe there's a latent demand for digital homesteading too; we see its fringes in the slow rise of things like Neocities, the indie web, and open source software over the past few years.
Personally I think we just have yet to see the 'killer app' for digital homesteading, some sort of central pillar or set of principles to grow around. The (small) web is the closest we have at the moment, but it carries a lot of technical baggage with it, too much to be able to walk the fine line needed between approachability and flexibility.
Anyway, that's enough rambling for now. I'll save the rest for a blog post.
[0] user-friendly as in being able to use it without learning anything first; not that that's necessarily in the user's best interest
A bunch of useful insights in your reply. I really liked the point about user interfaces getting simpler while developer experience gets more complex. A counter-argument that comes to mind is the violin: it has about the most difficult UI there is, yet a lot of people spend a lot of time mastering it and enjoy creating music with it - often independently or in small bands. How can that happen with more people in development? Maybe making the developer experience more joyful is the way to go.
I'm not against specialization - but specialization can be done at a small community level too.
Oh, the point wasn't that we can't do it now. The point is that not enough people choose to make their own custom software, and the systemic reasons behind that.
I don't think it's unrelated at all. I saw the same picture and just closed the tab right away. Why should I read this article, the whole thing might be written by an LLM.
Your comment reminds me of people complaining about how using emoji in communications/text has become normalized. Generating images with AI is pretty fun and seems like an appropriate thing to do for a personal blog. As in, this is the exact sort of place where it's most appropriate.
It's not like this person was ever going to pay someone to make a cartoon drawing so nobody lost their livelihood over it. Seems like a harmless visual identifier (that helps you remember if you read the article if you stumble across it again later).
Is it really such a bad thing when people use generative AI for fun or for their hobbies? This isn't the New York Times.
This happened to me too (almost subconsciously, I might add). I'm actually not anti-AI at all - maybe a bit uninterested in AI-made art, since I don't see much use for it except generating fun pictures of Golden Retrievers in silly situations - but this imitation-Ghibli art style is probably one of the least pleasing things to my eye that people love making. It's so round and without edge, its colors are washed out in a very non-offensive way, and it does not even look like the source material.
I wouldn't be so aggrieved by it, I think, if there wasn't that wave where everyone and their dog was making pictures in that style. Sorry, just a small rant tangentially related to the article, which is fine. :)
To make it a fair comparison, you also need to consider all the old-school Apache and PHP config files required to get that beautiful little script working. :) I still have battle scars.
Ahh, LAMP stacks… I remember there was a distro that had everything preconfigured for /var/www/ and hardened, but for the life of me I can’t remember its name.
A lot of distros did and still do that. Getting an Apache instance up and running with PHP running as a CGI process was just a matter of installing the right packages on RedHat-derived distros going back to the early 2000s, for example.
They weren’t hardened at all. Installing LAMP is one thing; ensuring it’s secure is another. Even RedHat would send an SA to your place to do that for you.
Fair enough. I wasn't getting the emphasis on hardening in your comment since the parent was just talking about the "battle scars" of configuration.
Re: hardening - I guess I deployed a lot of "insecure" LAMP-style boxes. My experience, mainly w/ Fedora Core and then CentOS, was to turn off all unnecessary services, apply security updates, limit inbound and outbound connectivity to only the bare minimum necessary w/ iptables, make sure only public key auth was configured for SSH, and make sure no default passwords or accounts were enabled. Depending on the application grubbing thru SELinux logs and adjusting labels might be necessary. I don't recall what tweaks there were on the default Apache or PHP configs, but I'm sure there were some (not allowing overrides thru .htaccess files in user-writeable directories, making sure PHP error messages weren't returned to clients, not allowing directory listings in directories without a default document, etc).
Everything else was in the application and whatever stupidity it had (world-writeable directories in shitty PHP apps, etc). That was always case-by-case.
It didn't strike me as a horribly difficult thing to be better-than-average in security posture. I'm sure I was missing a lot of obvious stuff, in retrospect, but I think I had the basics covered.
My point was there was a distro circa 1997-2003 or so that had all of that pre-baked. No having to mess with SELinux (or disabling it!), iptables, php.ini, apache's httpd.conf, or any of that other than putting your project into /var/www/ and doing a chown -R www on it.
I think Max's brain was not polluted with terror and showed trust in his tools.
Today many devs (and not programmers) are always suspicious, and terrified by the potential of something going wrong because someone will point a finger, even if the error is harmless or improbable.
My experience is that many modern devs are incapable of assigning significance or probabilities; they are usually not creative, fearful of "not using best practices", and do not take into consideration the anthropic aspect of software.
My 2 cents.
For years, every external pentest of every perimeter of companies with old-school stuff like this has been finding these things and exploiting them, and there are usually several webshells and other weird stuff already on the server by the time the testers get to it. Very often the company forgot, or didn't know, they had the thing.
The end state of running 15-year-old unmaintained PHP is that you accumulate webshells on your server or it gets wiped. Or you just lose it or forget about it, or the server stops running, because the same dev practices that got you the PHP mean you probably don't bother with things like backups, config management, version control, IaC, etc. (I don't mean the author, who probably does care about those things; I just mean in general.)
If these things are not a big deal (often they are not! and it's fun!) then absolutely go for it. In a non-work context I have no issues.
TBH I'm not 100% sure that either the PHP version _or_ the Go version of that code is free from RCE-style problems. I think it depends on server config (modern PHP defaults are probs fine), binary versions (like an old exiftool would bone you), OS (Windows path stuff can be surprising), and internal details about how the commands handle flags and paths. But as you point out, it probably doesn't matter.
> The reason the Go code is so much bigger is because it checks and (kind of) handles errors everywhere (?) they could occur
I’ve said before and will say again: error handling is most of what’s hard about programming (certainly most of what’s hard about distributed systems).
I keep looking for a programming language that makes error handling a central part of the design (rather than focusing on non-error control flow of various kinds), but honestly I don’t even know what would be better than the current options (Java/Python’s exceptions, or Go’s multiple returns, or Rust’s similar-seeming Result<T, E>). I know Linus likes using goto for errors (though I think it just kind of looks like try/catch in C) but I don’t know of much else.
It would need to be the case that code that doesn’t want to handle errors (like Max’s simple website) doesn’t have any error handling code, but it’s easy to add, and common patterns (e.g. “retry this inner operation N times, maybe with back off and jitter, and then fail this outer operation, either exiting the program or leaving unaffected parts running”) are easy to express.
Have you seen Common Lisp’s condition system? It’s a step above exceptions, because one can signal a condition in low-level code, handle it in high-level code and then resume back at the lower level, or anywhere in between which has established a restart.
> It would need to be the case that code that doesn’t want to handle errors (like Max’s simple website) doesn’t have any error handling code, but it’s easy to add, and common patterns (e.g. “retry this inner operation N times, maybe with back off and jitter, and then fail this outer operation, either exiting the program or leaving unaffected parts running”) are easy to express
Lisp’s condition system can handle that! Here’s a dumb function which signals a continuable error when i ≤ 3:
    (defun foo ()
      (loop for i from 0
            do (if (> i 3)
                   (return (format nil "good i: ~d" i))
                   (cerror "Keep going." "~d is too low" i))))
If one runs (foo) by hand then i starts at 0 and FOO signals an error; the debugger will include the option to continue, then i is 1 and FOO signals another error and one may choose to continue. That’s good for interactive use, but kind of a pain in a program. Fortunately, there are ways to retry, and to even ignore errors completely.
If one wishes to retry up to six times, one can bind a handler which invokes the CONTINUE restart:
    (let ((j 0))
      (handler-bind ((error #'(lambda (c)
                                (declare (ignore c))
                                ;; only retry six times
                                (unless (> (incf j) 6)
                                  (invoke-restart 'continue)))))
        (foo)))
If one wants to ignore errors, then (ignore-errors (foo)) will run and handle the error by returning two values: NIL and the first error.
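For comparison, the retry-with-backoff pattern asked for above, sketched with plain exceptions in PHP - all of these names are made up:

    // Hypothetical helper: run $op, retrying up to $attempts times with
    // exponential backoff plus jitter, then re-raise the last error.
    function withRetry(callable $op, int $attempts = 6, int $baseMs = 100) {
        for ($i = 0; ; $i++) {
            try {
                return $op(); // success: hand the value straight back
            } catch (Exception $e) {
                if ($i + 1 >= $attempts) {
                    throw $e; // out of retries: fail the outer operation
                }
                // back off: 100ms, 200ms, 400ms, ... plus up to 100ms of jitter
                usleep(1000 * ($baseMs * (2 ** $i) + random_int(0, $baseMs)));
            }
        }
    }

    // usage, with a hypothetical fetchImage():
    // $image = withRetry(fn () => fetchImage($id));

Unlike the condition system, the inner frames are gone by the time the handler runs, so the whole operation restarts rather than resuming where it failed.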
In terms of developer ergonomics, try/catch seems among the best we've come up with so far. We want to focus on the success case and leave the error case as a footnote.
That's the simplicity argument here too: sometimes we only want to write the success case, and are happy with platform defaults for error reporting. (Another thing that PHP handled out-of-the-box because its domain was so constrained; it started with strong default HTML output for error conditions that's fairly readable and useful for debugging. It's also useful for disclosure leaks, which is why the defaults and security best practices have shifted so much from the early days of PHP, when even phpinfo() was turned on by default and easy to run to debug some random cgi-bin server you were assigned by the hosting company that week.)
Most of the problems with try/catch aren't even really problems with that form of error handling, but with the types of the errors themselves. In C++/Java/C#/others, when an error happens we want stack traces for debugging, and stack walks are expensive and may require pulling symbol data from somewhere else, which is also expensive. But that's not actually inherent to the try/catch pattern. You can throw cheaper error types. (In JS you don't have to throw the nice Error family that does stack traces; you could throw a cheap string, for instance. Python has some stack-walking tricks that keep its Exceptions somewhat cheaper and a lot lazier, because Python expects try/except to be a common flow-control idiom.)
We also know from Haskell do-notation and now async/await in so many languages (and some of Rust's syntax sugar, etc) that you can have the try/catch syntax sugar but still power it with Result/Either monads. You can have that cake and eat it, too. In JS, a Promise is a future Either<ResolvedType, RejectedType> but in an async/await function you are writing your interactions with it as "normal JS" try/catch. Both can and do coexist in the same language together, it's not really a "battle" between the two styles, the simple conceptual model of try/catch "footnotes" and the robust type system affordances of a Result/Either monad type.
(If there is a war, it's with Go doing a worst-of-both-worlds and not using a true flat-mappable monad for its return type. But then that would make try/catch easy syntax sugar to build on top of it, and that seems to be the big thing they don't want, for reasons that seem as much obstinacy as anything to me.)
Abstracting error checking pays huge dividends, then. In PHP, if something crashes, it continues running and outputs nonsense (probably alright for the simplest of sites but you should turn this off if your thing has any kind of authentication) or it stops processing the page. PHP implicitly runs one process per request (not necessarily an OS process); everything is scoped to the request, and if the request fails it can just release every resource scoped to the request, and continue on. You could do the same in a CGI script by calling exit or abort. With any platform that handles all concurrent requests in a single process, you have to explicitly clean up a bunch of stuff, flush and close the response, and so on.
There's a similar effect in transactional databases - or transactional anything. If you run into any problem, you just abort the transaction and you don't have to care about individual cleanup steps.
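In PHP that looks something like this with PDO - a sketch, assuming $db is an open connection and the table is hypothetical:

    // One rollback undoes every step; no per-step cleanup code needed.
    $db->beginTransaction();
    try {
        $db->exec("UPDATE accounts SET balance = balance - 100 WHERE id = 1");
        $db->exec("UPDATE accounts SET balance = balance + 100 WHERE id = 2");
        $db->commit();
    } catch (Exception $e) {
        $db->rollBack(); // the database forgets the whole attempt
        throw $e;
    }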
On the second point, make errors part of the domain, and treat them as a kind of result outside the scope of the expected. Be like jazz musician Miles Davis: instead of covering up mistakes, make something wrong into something right. https://www.youtube.com/watch?v=FL4LxrN-iyw&t=183
This. The hardest part of solving a problem is to think about the problem and then come up with the right solution. We actually do the opposite: we write code and then think about it.
What a great read! And so many good insights! It almost made me want to convert a project to PHP - perhaps I will for a smaller project.
I love the simplicity and some of the great tools that PHP offers out of the box. I do believe that it only works in some cases. I use Go because I need the error handling, the goroutines, and the continuously running server to listen for Kafka events. But I always, always try to keep it simple, sometimes preferring a longer function over a useless abstraction that would only add more constraints. This is a great reminder to double my efforts when it comes to KISS!
The gist of the article is a fun thought experiment.
Why count lines of code? Error handling is nothing to sniff at, especially in prod. Imagebin had a small handful of known users; open it up to the world and most of the error handling in Go comes in handy.
For PHP, quite a bit was left on the shoulders of the HTTP server (e.g. routing). The final result of Go is a binary which includes the server. The comparison is not fully fair, unless I'm missing something.
Lines of Code has always been a tertiary indicator at best. It's supposed to only be used as a (very) rough indicator when you're trying to figure out the overall complexity of a project. As in, "there's 50 million lines of code in Windows."
Knowing a figure like that, you can reason that it's too big for a single developer. Therefore, you'll likely need at least two; and maybe a few thousand marketing people to sell it.
It’s certainly useful, but without looking, what is the difference between these two methods in the standard library:
array_sort
sortArray
Even if you can answer that off the top of your head, consider how ridiculous it is that you needed to memorize that at some point. This is not the only example of such a thing a PHP dev needed to remember to be effective, either.
Any programming language can be wielded in a simple way. Perl, for example, is superior to PHP in every way that is important to me.
Go is as well, even though it’s slightly more verbose than PHP for the author’s imagebin tool.
We don’t do things simply because we’ve all been taught that complexity is cool and that using new tools is better than using old tools and so on.
My employer creates pods in Kubernetes to run command-line programs on a fixed schedule that could (FAR MORE SIMPLY) run in a cronjob on a utility server somewhere. And that cronjob could email us all when there is a problem. But instead I have to remember to regularly open a bookmark to an OpenSearch host with a stupidly complex 75-character hostname somewhere, find the index for the pod, search through all the logs for errors, and if I find any, I need to dig further to get any useful contextual information about the failure … or I could simply read an email that cron automatically delivered directly to my inbox. We stumble over ourselves to shove more things like that into Kubernetes every day, and it drives me nuts. This entire industry has lost its goddamned mind.
OK, now I want to know: does Max's PHP code have security issues? Because especially in early, straightforward PHP, those were all over the place. I vaguely remember PHP 3 just injected query variables into your variables (register_globals)? But as $_GET is mentioned, this is probably at least not the case...
Both versions have security issues if you're sufficiently paranoid, because they shell out to exiftool on untrusted input files without any sandboxing. Exiftool has had RCE flaws in the past, and will likely have them again.
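To make the threat model concrete: escaping only fixes shell injection, not bugs inside exiftool itself. A hedged PHP sketch, with a hypothetical $uploadPath:

    // escapeshellarg() stops a hostile filename from injecting shell syntax...
    $cmd = 'exiftool -json ' . escapeshellarg($uploadPath);
    exec($cmd, $output, $status);
    // ...but it does nothing about a crafted file that exploits a bug in
    // exiftool's own parsers. That takes sandboxing (separate user, container,
    // seccomp, etc.) or not shelling out at all.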
Perhaps you'd like the original story he's referencing: https://foldoc.org/The%20Story%20of%20Mel. (Actually, he's referencing the free verse version, but I don't like it as much, because it doesn't mention "most pessimum")
No they are not, which makes the case for breaking up applications whenever possible. Some think that means microservices, but that's not my point.
The example with the image sharing is pretty good, because it only needs to share images. In, shall we say, more commercial settings, it would grow to handle metadata, scaling, comments, video sharing, account management and everything in between. When that happens, Max's approach breaks down.
If you keep your systems as "image sharing", "comments" and "blog" and just string them together via convention or simply hard-coded links, you can keep the simple solutions. This comes at the cost of integration, but for many uses that's perfectly fine.
I think that's the point. Max did things in the stupidest way that could possibly work, and it did work, and was simpler than the "smart" way, so was he less of a "real" programmer than Mel?
I think for a kid, Max's code was great but ultimately you do need to learn to think about things like error handling, especially if your code is intended to go into "production" (i.e., someone besides yourself will use/host it).
> You might think that Max's practices make for a maintenance nightmare. But I've been "maintaining" it for the last 15 years and I haven't found it to be a nightmare.
C'mon, you're talking about 200 LoC here. Anything except Brainfuck would be maintainable at this scale.
Have you ever had to fix a non-trivial third-party WordPress plugin? The whole API used to be a dumpster fire of global state and magic functions. I don't know what it is now, but 15 years ago it was a total nightmare.
I think of "straight-line code" as a distinct sort of code. It's the sort of code that does a thing, then does the next thing, then does the next thing, and if anything fails it basically just stops and yields some kind of error because there's nothing else to do. Programmers feel like they ought to do something about it, like this is bad, but I think there's actually great value in matching the code to the task. Straight-line code is not necessarily improved by some sort of heavyweight "command" pattern implementation that abstracts it into steps, or bouncing around a dozen functions, or through many objects in some other pattern. There's a time and a place for that too; for instance, if these must be configured that may be superior. But a lot of times, if you have a straight-line task, straight-line code is truly the best solution. You have to make sure it doesn't become hairy, there are some traps, but there's also a lot of traps in a lot of the supposed "fixes", many of them that will actually bite you worse.
For many years now I've been banging on the drum that if you've been living solely in the dynamic scripting language world for over a decade, you might want to look back at static languages to put one in your tool belt. When the dynamic scripting languages first came out, they would routinely be estimated at using 1/10th the lines of static languages, and at the time I would have called that a pretty good estimate. However, since then, the gap has closed. In 1998, replacing a 233-line PHP script with a merely 305-line static-code replacement would have been unthinkable. And that's Go, with all its inline error-handling; an exception-based, modern static language might have been able to effectively match the PHP! Post this in the late 90s and people are going to be telling you how amazing it was the static code didn't take over 2000 lines. This doesn't represent our tools falling behind... this represents a staggering advance! And also the Go code is going to likely be faster. Probably not in a relevant way to this user, but at scale it would be highly relevant.
A final observation is that in the early PHP era, everything worked that way. Everything functioned by being a file that represented a program on the disk corresponding to that specific path. If you want to get fancy you had a path like "/cgi-bin/my.cgi/fake/path/here" and had your "my.cgi" receive the remainder of the path as a parameter, and that was a big deal. It took the web world more-or-less a decade to get over the idea that a URL ought to literally and physically correspond to something on the disk. We didn't get rid of that because we all hate fun and convenience. We got rid of that because it produces a lot of big problems at even a medium scale and it's not a good way to structure things in general. It's not something to mourn for, it's something we've had better ways of doing now for so long that people can forget why they're the better way.
Early 2000s PHP was a DSL for very simple web apps. So it's no surprise it excels at that.
People soon found out that it was not very good at complex web apps, though.
These days, there's almost no demand for very simple web apps, partially because common use cases are covered by SaaS providers, and those with a need and the money for custom web apps have seen all the fancy stuff that's possible and want it.
So it's no surprise that today's languages and frameworks are more concerned with making complex web apps manageable, and don't optimize much (or at all) for the "very simple" case.
> These days, there's almost no demand for very simple web apps, partially because common use cases are covered by SaaS providers, and those with a need and the money for custom web apps have seen all the fancy stuff that's possible and want it.
I dunno about that.
In 2000, one needed a cluster of backends to handle, say, a webapp built for 5000 concurrent requests.
In 2025, a single monolith running on a single VM, using a single DB on another instance can vertically scale to handle 100k concurrent users. Put a load balancer in front of 10 instances of that monolith and use RO DB followers for RO queries, and you can easily handle 10x that load.
> So it's no surprise that today's languages and frameworks are more concerned with making complex web apps manageable, and don't optimize much (or at all) for the "very simple" case.
Maybe the goal is to make complex web apps manageable, but in practice what I see is even very simple webapps being made with those frameworks.
I disagree. I would say most of the migration from PHP was due to the appeal of one language for frontend and backend, and fashion/hype. PHP is still very usable for server-side rendering and APIs. You say "very simple" as if you can't have complex systems with PHP.
I see the current state of web development as a spiral of complexity with a lot of performance pitfalls. Over-engineering seems to be the default.
A couple of hundred lines of code is always going to be easy to maintain, unless it's purposely written in an obfuscated and confusing style.
A project with only two maintainers in its lifetime isn't going to be subject to the kind of style meanderings that muck up a codebase that's gone through dozens of maintainers over its lifetime.
A couple of thousand lines needs some organization; one function of two thousand lines is impenetrable.
I suppose everybody knows this is a riff on "The Story of Mel, a Real Programmer" (1983), but I'm posting the link to the classic story here just in case: https://users.cs.utah.edu/~elb/folklore/mel.html
(Actually, the reworked story in free verse style, which is its most popular form)
TFA is cute but it kinda misses the point, because the original Mel didn't write code that was simple or easy to understand. It was simple to him, and arguably there was some elegance to it once you understood it, but unlike the PHP from the updated story, Mel's code was machine code, really hard to understand or modify, and the design was all in his mind.
> there is no longer any use for simple little programs like this that scratch an itch.
But they're personal itches, not productizable itches. The joy is still there, though.
> Maybe ask your LLM to mention itself in a comment near the top of the file.
I certainly consider it a good idea, now that it has come to mind. And it will work very well.
When the project becomes more complex, things change for the worse. Also, you need to protect modules not only from errors, but also from the other programmers on your team.
> Have you seen Common Lisp’s condition system?
https://gigamonkeys.com/book/beyond-exception-handling-condi... is a nice introduction; https://news.ycombinator.com/item?id=24867548 points to a great book about it. I believe that Smalltalk ended up using a similar system, too.
> We actually do the opposite: we write code and then think about it.
This is how "features" get added to most Microsoft products these days :thumbsup:
> This entire industry has lost its goddamned mind.
Yep, stay-with-the-fad pressures mean people need to farm experience using those fads. It won't change until the industry is okay with slowing down.
Please get it running on at least PHP 8.3. Running PHP 5 or 7 on servers available to the public is negligent in 2025.
Lots of software projects don't have this luxury, sadly.
> most of the migration from PHP was due to the appeal of one language for frontend and backend, and fashion/hype
Definitely not. PHP lost far more market share to Java, C# and Ruby on Rails than to Node.js.
> PHP is still very usable for server-side rendering and APIs.
Not "is still", but "has become". It has changed a lot since the PHP 3 days.
> You say "very simple" as if you can't have complex systems with PHP.
With early 2000s PHP, you really couldn't, not without suffering constantly from the language's inadequacies.
> I see the current state of web development as a spiral of complexity with a lot of performance pitfalls. Over-engineering seems to be the default.
I don't disagree, but that seems to happen most of all in the frontend space.
They eventually made it fit for purpose with Laravel ;-)
Mel would also scoff at PHP.
Simple is robust.