Just saw this pop up — full public PoC for CVE-2026-42945 ("NGINX Rift"), a heap buffer overflow in NGINX's ngx_http_rewrite_module that's been there since 0.6.27 (2008).
It triggers on a very common pattern: a `rewrite` directive (with an unnamed capture like $1/$2 and a `?` in the replacement string) followed by `set`, `if`, or another `rewrite`. The root cause is a classic two-pass script engine bug (length calculation vs. actual copy pass with ngx_escape_uri).
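Based on that description, a vulnerable configuration would look something like this (a sketch with made-up paths and variable names, not taken from the PoC):

```nginx
# Hypothetical vulnerable shape: an unnamed-capture rewrite whose
# replacement contains a '?', followed by a directive that uses $1/$2.
location /legacy {
    # the trailing '?' in the replacement suppresses appending the original args
    rewrite ^/legacy/(\d+)/(.*)$ /app.php?id=$1&page=$2? last;

    # a subsequent 'set' (or 'if'/'rewrite') referencing an unnamed capture
    set $legacy_id $1;
}
```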
The PoC turns it into unauthenticated RCE using cross-request heap feng shui + pool cleanup pointer corruption. Tested with a simple Docker setup.
Affects basically any NGINX doing URL rewriting in front of apps/PHP/etc. Workaround mentioned is switching to named captures.
The discovery angle is also interesting — it was found autonomously by depthfirst's security analysis tool after one-click onboarding of the NGINX source.
Anyone running NGINX in production using rewrite rules? How are you checking your configs? Thoughts on the exploit chain or the AI-assisted finding process?
The exploit they chose assumes ASLR is disabled for simplicity's sake, but if you read the full writeup they say they could have used the vulnerability to map the memory layout. It's nice to have ASLR, but some types of vulnerabilities can be used to bypass it.
Wow, coming from the webdev world, it is so funny seeing NGINX, one of the most widely used web servers in the world, on version 1.x. React is on version 19. Really shows how differently new vs. old software is designed and built, and not necessarily in a good way.
I chalk that up more to different versioning schemes rather than how much work is being done. If nginx changed whole numbers like react did, I bet it would be even higher.
Anyone can choose any version string convention they want for their project. Comparing two different pieces of software by their version strings doesn't make sense.
This one's pretty bad but there are some preconditions.
Requires a "rewrite" directive with a question mark in the replacement string, and then a subsequent "set" directive that references a regex capture group (e.g. set $var $1).
Not the person you asked, but I am not aware of any that disable ASLR by default. Most distributions default kernel.randomize_va_space to 2, which randomizes the stack, mmap regions, VDSO and heap; 1 leaves the heap unrandomized, and some distributions that ship a hardened kernel use 3. Note that an executable's own code segment is only randomized if it was built as a PIE. Rather than trusting any assumptions, I prefer to run checksec [1] on every OS I touch. It's an old script but works just as well today as it did long ago. One may find that some applications are missing basic compile-time hardening options; the script is not an exhaustive test of all modern hardening options. A quick check that ASLR is forced on: cat /proc/sys/kernel/randomize_va_space should print 2 (or 3 on those hardened kernels).
A checksec --proc-all invocation lists the status of RELRO, Stack Canary, NX/PaX and PIE for all running daemons; my CachyOS installation, for example, is missing Stack Canaries for all daemons. FORTIFY_SOURCE can be queried per PID:
checksec.sh --fortify-proc 732
* Process name (PID) : sshd (732)
* FORTIFY_SOURCE support available (libc) : Yes
* Binary compiled with FORTIFY_SOURCE support: No
Some additional compile time hardening options [2] and discussion [3]. Even Rust apparently has some compile time security related options.
Apache still runs about 23-28% of websites (with some measurements suggesting it is pretty close to equal with nginx). PHP is still in use by 70-80% of websites (numbers vary depending on where you look).
You make it sound like both pieces of tech are irrelevant. Nothing could be further from the truth.
some quick googled examples (like I said other sites' numbers vary, but you get the general idea):
Worker processes are forked from the master, which means they receive the same memory layout. You get unlimited crashes against the worker. There's probably a way to exploit that to get a read oracle. At the very least this is a reliable denial of service.
I doubt it: ASLR is not as easy to break on modern Linux as everyone in this thread wants to pretend it is. And anybody who actually cares so much about security that a compromised web frontend is the end of the world should be doing other things which would additionally mitigate this...
I know they claimed they can bypass it: if that's true, they should publish it. The forking nature of nginx is uniquely bizarre and vulnerable, and I strongly suspect that's the only way they're pulling it off. I feel like that's the interesting thing here, not the buffer overrun.
Apache used forked processes; I don't think that's unique or a particular issue. NGINX uses async I/O to handle requests, which is a substantial upgrade from Apache; that's why it's performant.
Memory corruption vulnerabilities are possible whenever a language is used that performs copies of data across buffers without in-language guards.
This vulnerability does not require knowledge of the memory layout to generate worker crashes against a system with vulnerable configurations.
The vulnerability is not the end of the world. System administrators will upgrade nginx with the security patch when it's released across most distribution paths (right now it's available only on unstable Debian for example). In the meantime sysadmins will likely remove the vulnerable directives from nginx configs.
> Apache used forked processes; I don't think that's unique or a particular issue.
Of course it is... in a typical threaded daemon, the threads have randomized stack addresses. Exactly as you observed, you get unlimited tries because nginx dutifully restarts the worker process with the same literal stack address every time it segfaults. I'm willing to bet the ASLR break they claim to have relies on that, but I'd be happy to be proven wrong if they publish it :)
I mean... you're missing the forest for the trees, but yes, I meant "address space" generally, not "stack" specifically. The nginx workers are forked; it would not be terribly complex to set up a heap with a new random address base in each worker (the only real complexity is dealing with heap allocations which happened before fork()). But the stack matters too, generally more so.
I find it very unlikely that anyone using nginx does NOT use `set` at least.
Most nginx use cases are to terminate TLS and then pass the request to node/php/go/etc. So I bet you have at least one set with attacker-controlled data on a line like 'proxy_set_header X-Host $host;'
edit: nvm, apparently named captures are not affected. Unless you have a $1 somewhere, it should be fine.
Just as a PSA: I found that "nginx -v" did not report the version in enough detail to check, but "apt list nginx" gave the full, checkable version number, and indeed this morning's 24.04 version (1.24.0-2ubuntu7.8) is patched.
As a security person it is tiring to see so many people here either directly claim or at least allude to the claim that this is somehow much less scary because the _published_ exploit does not bypass ASLR. The writeup claims there is a way to reliably bypass ASLR with this attack. And that is a good default assumption I would be willing to believe without evidence.
ASLR is a defense-in-depth technique intended to make exploitation more difficult. In almost all cases it is only a matter of time and skill to also include an ASLR bypass. Both requirements continue being lowered by LLM agents every few weeks. It is only a matter of time (and probably not a lot of time) until a fully weaponized exploit is developed. It may be published, it may also be kept private.
It is straight-up wrong to say "if you have ASLR enabled, you're not at any risk from this", and claims like that are extremely harmful to anyone who trusts them.
This wrong belief that you shouldn't care about security vulnerabilities because mitigations may make exploitation more difficult has already caused so much harm in the past. Be glad that modern mitigations exist, but patch your stuff asap. If you are a vendor, do not treat vulnerability reports as invalid because the researcher has not provided an ASLR bypass. Fix the root cause and hope mitigations buy you enough time to patch before you get owned.
> and claims like that are extremely harmful to anyone who trusts them.
Kind of feels like the burden is on the reader, though. Good luck stopping people from spreading misinformation on the internet; most of them don't even know they're wrong.
What's extremely harmful is trusting random internet comments stating stuff confidently. Get good at seeing through that, and it'll serve you well in security and beyond.
No remotely reachable vuln should be taken lightly.
At the moment, though, the preconditions look odd. I've been using nginx in various setups for 10 years and never once combined rewrite and set.
There can be situations where you set some variables on top level and then override those in the location block with rewrite. These variables could be then used e.g. in log lines or in other "global" contexts.
> ASLR is a defense-in-depth technique intended to make exploitation more difficult. In almost all cases it is only a matter of time and skill to also include an ASLR bypass. Both requirements continue being lowered by LLM agents every few weeks. It is only a matter of time (and probably not a lot of time) until a fully weaponized exploit is developed. It may be published, it may also be kept private.
I disagree with this take, or I would at least phrase it differently. ASLR is like an extra password you need to guess: it has a certain amount of entropy and it is usually stable. Unless the vulnerability has a component that leaks information, ASLR completely mitigates it, or you need a second vulnerability; and that is a different conversation. ASLR can completely mitigate an individual vulnerability, but possibly not an exploit chain.
I would use the argument of a possible second vulnerability that leaks information to make people patch quickly anyway. But exploit chains are a risk for all kinds of vulns.
> History shows that "meh, ASLR mitigates this" is a vastly bolder claim anyway, so I don't feel much need to defend my position here.
Obviously you need to defend it; that is quite a generalization. You need to show how the vulnerability itself reduces the entropy of ASLR.
The authors don't really support that claim. They just say that they can brute-force it without crashing the whole nginx. But they don't say how the entropy is reduced. They have zero information about where the child process even starts, whether they hit the child, or whether it is even the same child. So provide us precise technical reasoning for why ASLR is not mitigating here.
> Not really? Looks like we have a controlled-length overflow on a fork-based server, a situation where ASLR is known to not be very useful.
It does not work like that; it has certain preconditions. You also need a reliable oracle that tells you when you actually hit the child process, whether the child crashes, and whether you are even in the same child. When you can retrieve this information, you are removing re-randomization between attempts. That reduces the entropy, but it only helps if the remaining search space is small enough. They don't show that they have such an oracle.
Additionally, for RCE you need to find the libc base, and that is randomized separately. The authors simply ignore in the post how they got that address. For that you most likely need an information leak from a second vulnerability, even if you can brute-force the actual vulnerability.
Information leaks are not uncommon at all. nginx seems like a good target for them as well (fork without exec means no re-randomization, so you can re-run your exploit many times to improve stability). edit: Seems there's already good work in this area; I kinda forgot about BROP, gosh I'm old: https://www.scs.stanford.edu/brop/
I suppose to keep the password analogy together, people reuse passwords all the time, timing attacks exist, etc?
For this particular bug, for that to apply you need some sort of oracle that tells you that you are actually in the same child process (the one that skips re-randomization) before you can reduce the entropy. Based on this post, I cannot see that there is a stable oracle for that.
The idea is that ASLR bypasses are comparatively cheap, so yes, a chain without this is useless, but it's not that hard to find one. Probably much easier than the bug described here.
Yeah, when I read these RCE reports about public-facing software that I know about, I usually upgrade within minutes of reading the report. That's why I read these reports, and you really have to take them seriously, because otherwise your machine gets compromised sooner rather than later. It seems like lately there's been no advance notice on a lot of these publicly released RCE exploits. I mean, come on guys, at least give us a few minutes to upgrade our software before releasing the exploit. It feels like the late 1980s / early 1990s, when there were no guardrails on disclosure, i.e. all the remotely exploitable sendmail bugs. People who fail to read these reports, or read them too late, wind up with millions of machines compromised. Currently nginx has about a 39%-43% share of the public-facing web server market, so it's pretty serious.
> It is straight-up wrong to say "if you have ASLR enabled, you're not at any risk from this", and claims like that are extremely harmful to anyone who trusts them.
You can safely assume a 1:1 overlap between the people that claim "AI will solve cyber" (and they always say 'cyber') and the people saying this.
I think some people's comments are misinterpreted as well. When people say "the PoC requires ASLR to be disabled" that doesn't necessarily mean the exploit is useless, but it does mean that the risk of automated exploit bots downloading the PoC and pwning random servers is reduced for now.
It's a matter of time before this exploit is chained with an ASLR bypass, but it allows for a slightly wider patch window at the very least.
Is there a good alternative to Apache and Nginx that's written in a memory-safe language and not full of security holes? I briefly looked at Jetty (written in Java) and Caddy (written in Go) but they seem to have a history of vulnerabilities of other types (e.g. shell injection in Jetty) so I'm not sure they would be any better.
Caddy has been a breeze to use. Bit of a sucky model with "we have thousands of binaries depending on what combination of plugins you want" instead of a proper plugin system, but if you're building it from source, it's pretty nifty and simple anyway.
Recompiling with the features you want is a great model for a free software project. So much simpler to write and maintain compared to a plugin system that it really makes more sense in a lot of cases.
I've switched to using traefik from caddy. For simple use cases it's a little more verbose in the configuration, but for more involved things like multiple load balancing backends, rewriting paths and headers and so on I've found it really good.
That is a consequence of static linking, and an abandoned plugin package that hasn't yet been removed due to backwards compatibility.
People keep forgetting that with static linking they are back to 1980's IPC for application extensions, or building from scratch every time they need to reconfigure the application.
Any software used at the scale of Apache and nginx will have a history of vulnerabilities. The fact they both survived with their market share for so long is a good sign
On the one hand Apache and Nginx are mature and proven but, being written in C, they will always suffer from memory-safety issues like this one and the recent Apache vulnerabilities.
On the other hand, the alternatives are perhaps not as mature and perhaps not implemented as securely as they could be, given that e.g. Caddy had multiple vulnerabilities in its request parsing this year and Jetty's shell injection vulnerability seems easily foreseeable and avoidable. Using a memory-safe language doesn't help much if you then (to take an unrelated but well-known example) implement arbitrary code execution as a feature in the logging library.
LDAP feature can be removed from log4j, but buffers can't be removed from nginx. Technically, bounds checking can be implemented, but presumably nginx has no plans for it, because it's anathema.
Apache and I think Nginx have a huge list of features and stuff. Most alternate http servers limit the scope a lot, so you'd need to specify what features you're interested in.
But I haven't seen a whole lot of discussion of http servers in memory safe languages. The big three C-based servers: Apache, Nginx, and lighttpd are all pretty solid... I don't think there's a lot of people interested in giving that up for a new project just because of the language.
I'll also add that when you pick up most memory safe languages, you're also picking up their sometimes extensive runtime / virtual machine and all the accoutrements. A Java webserver probably uses log4j because any random Java project probably does, etc.
Memory safety is good, but does not protect from every threat. In this day and age infrastructure operators should familiarize themselves with proactive defenses, MAC: SElinux and AppArmor. It required much friction earlier, but there are more tools to ease the usage today.
tl;dr If you don't use ngx_http_rewrite_module, you're fine
Honestly it's such a weird feature, if you're doing complicated redirects like this in nginx where PCRE is necessary, you should do it in your application code. And if you need speed use ngx_http_lua_module.
We do this for 3 sub-domains of ardour.org; there's no application code involved, because we're rewriting historical URLs to their current form, and the "application" doesn't do that or need to do that or need to know about that.
Your opinion is that if, for a godforsaken reason, someone needs to rewrite URLs in their web server, they should avoid PCRE (something designed for string manipulation) because it's overkill, and they should use Lua (a full programming language) instead?
[OP] hetsaraiya | 18 hours ago
- Repo + Python exploit: https://github.com/DepthFirstDisclosures/Nginx-Rift - Full technical write-up: https://depthfirst.com/research/nginx-rift-achieving-nginx-r... - F5 advisory + patches (1.31.0 / 1.30.1 for OSS, plus Plus updates): https://my.f5.com/manage/s/article/K000160932 (or the latest K000161019)
jmaw | 17 hours ago
https://world.hey.com/dhh/finished-software-8ee43637 https://josem.co/the-beauty-of-finished-software/
shooly | 17 hours ago
How do you think versioning works? You know that it's completely arbitrary and up to the author, right? Very ironic comment.
0x457 | 12 hours ago
Doesn't change the fact that only "breaking" changes in 1.x.x line are changes to defaults.
TheDong | 11 hours ago
The venerable unix tool "less" is on v701 and was probably already over 300 before react was born
https://github.com/gwsw/less/releases/tag/v701
danslo | 17 hours ago
Also the POC assumes ASLR is disabled.
dsr_ | 17 hours ago
If you were to do it by hand, nginx doesn't come to mind as a likely candidate.
Bender | 15 hours ago
[1] - https://www.trapkit.de/tools/checksec/ # some Linux repositories already contain "checksec".
[2] - https://best.openssf.org/Compiler-Hardening-Guides/Compiler-...
[3] - https://news.ycombinator.com/item?id=43533516
nocman | 9 hours ago
https://www.wappalyzer.com/technologies/web-servers/ https://kinsta.com/php-market-share/
linkregister | 17 hours ago
Depth First's full writeup: https://depthfirst.com/research/nginx-rift-achieving-nginx-r...
ChrisArchitect | 17 hours ago
https://depthfirst.com/research/nginx-rift-achieving-nginx-r... (https://news.ycombinator.com/item?id=48126029)
https://depthfirst.com/nginx-rift (https://news.ycombinator.com/item?id=48123365)
neomantra | 17 hours ago
As noted elsewhere, ASLR protects you. While you are waiting for your affected platform to get the fix, they note the mitigation:
"use named captures instead of unnamed captures in rewrite definition"
"To mitigate this vulnerability for this example, replace $1 and $2 with the appropriate named captures, $user_id and $section"
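Applied to a hypothetical rule (the $user_id/$section names follow the quoted guidance; the pattern itself is invented for illustration):

```nginx
# Before: unnamed captures with a '?' in the replacement (vulnerable shape)
rewrite ^/u/(\d+)/(\w+)$ /profile.php?uid=$1&sec=$2? last;

# After: named captures, referenced by name instead of $1/$2
rewrite ^/u/(?<user_id>\d+)/(?<section>\w+)$ /profile.php?uid=$user_id&sec=$section? last;
```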
F5 patched 1.31.0 and 1.30.1.
OpenResty has a patch for 1.27 and 1.29: https://github.com/openresty/openresty/commit/ee60fb9cf645c9...
You can track the progress of OpenResty (a Lua application server based on nginx) here: https://github.com/openresty/openresty/issues/1119
buzer | 14 hours ago
Not extremely common, but it does happen.
l23k4 | 2 hours ago
History shows that "meh, ASLR mitigates this" is a vastly bolder claim anyway, so I don't feel much need to defend my position here.
Edit: Even the authors of this poc seem to agree with me https://depthfirst.com/research/nginx-rift-achieving-nginx-r...
l23k4 | 2 hours ago
> You need to prove how the vulnerability itself reduces the entropy of ASLR
Not really? Looks like we have a controlled-length overflow on a fork-based server, a situation where ASLR is known to not be very useful.
nobody42 | 12 hours ago
https://presentations.nordisch.org/apparmor/
https://github.com/nobody43/apparmor-profiles/blob/master/ng...
https://github.com/nobody43/apparmor-suggest
Disclaimer: I'm the author of both repos.
tredre3 | 13 hours ago
Am I understanding you correctly?
JSR_FDED | 7 hours ago
First time I’ve seen feng shui used in this manner..?