I recently encountered some behavior in nginx that struck me as odd. Nginx has long had a client_max_body_size setting, which controls how large an HTTP POST or file upload the server will accept by checking the Content-Length header specifically. Attempting to send a request larger than this results in a 413 Request Entity Too Large error from the server. But is this always the case?
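For reference, the limit is typically set like this (a minimal sketch; the directive is valid in the http, server, and location contexts, and the 8m value here is just illustrative):

```nginx
server {
    listen 80;

    # Requests whose Content-Length exceeds 8 MB get a 413 response.
    # The default is 1m; setting it to 0 disables the check entirely.
    client_max_body_size 8m;
}
```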
3 years late... The lost dpedu.io article is finally here!
I host my projects on a dedicated server rented from OVH, and I'm a huge VMware fan. Since not all of my virtual machines need internet-facing IPs, an easy solution is to run a pfSense install as a virtual machine acting as a NAT router. Unfortunately, how to do this on OVH is documented only in scattered blog posts across the internet. So here's another entry for the pile.
Building a distributed / redundant data storage backbone - part 1 of 2
I'm not a fan of losing data, so I always have some sort of redundant storage at my disposal. Be it for backups, media storage, or even temp space, having a huge pool of "safe" storage is always a useful tool.
I recently (finally!) acquired a low-end 3D printer, and after a few practice prints I decided to put my new skills to the test. I wanted to create a Raspberry Pi case, but I thought that would be a little boring. So the idea I settled on was something akin to a blade server: an enclosure that houses several Pis and takes care of power and LAN distribution. Overkill? Absolutely!