// How this works
On the build side
Very little data crunching is actually done on the server that is sending you these pages — almost everything is precomputed/precompiled on the build server — i.e., my personal computer.
A bunch of custom string wrangling happens as part of the build process to turn various text files into the things web apps here consume.
Custom building, compiling, and minifying of everything also happens during the build, using various npm tools.
Everything has ETags generated and is then brotli-compressed and written out into one big data file which, along with some other things, is then deployed to the server.
On the server side
Everything lives in a number of Docker containers and volumes.
The actual web server reads the previously mentioned big data file into memory (it isn't that much data), and then happily and blindly spits out responses based on the request URL, with no further disk access needed for most requests.
Absolutely no regard is given to whether the client specified Accept-Encoding: br or not, because if your client doesn't accept brotli-encoded responses you are terrible.
Luckily, since you're reading this, you have one fewer avenue of terribleness — a load off your mind I am sure.
Larger things, like most images and downloads, live as regular files in the Docker volume, and are streamed to the client as required.
These files are indexed along with ETags in that main data file that is read into memory, so no disk access is required for requests returning 404 or 304.
Still other things, like the private Git server, live in their own Docker containers and volumes, and are proxied to by the main web server process.