CDN, Monitoring, Nginx, cURL, BorgBackup, NTFS
Tested BunnyCDN for binary package delivery. Had some issues with updates hitting the servers at the same time, and using a cheap Content Delivery Network (CDN) @ Wikipedia would also reduce bandwidth costs. As a plain CDN it's better and faster than Cloudflare. Of course it doesn't provide similar protections, but if we're talking just CDN, it's better. That's really awesome. Their POP list currently seems to be missing the Helsinki POP, but content gets delivered from it anyway, so it's reasonable to assume they've got other unlisted POPs as well. Confirmed from headers: "server: BunnyCDN-FI1-581". Their logged-in dashboard also confirms the Helsinki location. Caching of their CDN also works perfectly, absolutely great. Setup time for a simple setup? Well, under a minute (using a pull zone). Now I'll just go through all the features and the API to get a good grasp of what's being offered. I've already checked the pricing and it's great.
CDN storage also works great and was easy to set up. Files are now stored by the CDN, so there's no need for an "origin server" at all. That took two more minutes to configure. Nice logs. Traffic Manager is really nice, and they do have the Optimizer for an extra price, plus edge rules. Tested, easy to set up, works great. Yay! Slight delay when updating country-blocking rules, but that's kind of expected, and it's probably not something that gets updated often anyway. Tested the purge features, per URL and per pull zone; worked perfectly. Other headers: "cdn-pullzone, cdn-edgestorageid, cdn-cache".
One more thing to test: client-generated authentication tokens. Then I'm done. Well yeah, they're using plain base64 in URLs, which isn't great for compatibility. Their example code uses quite a bunch of replace statements. I wonder why they're not using urlsafe_b64encode, which would make those replace statements unnecessary. Then they could just rstrip('=') to get rid of the trailing '=' padding characters. Tested and it works, but as mentioned, I didn't like their example code.
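For reference, a minimal sketch of the two encoding approaches. The token scheme here (base64 of an MD5 over key + path + expiry) only approximates what their example code does, and the function names and inputs are mine. The point stands either way: urlsafe_b64encode already maps '+' and '/' to '-' and '_', so only the trailing '=' padding needs stripping:

```python
import base64
import hashlib

def sign_url_replace_style(security_key: str, path: str, expires: int) -> str:
    """Token built the way the example code does it: standard base64
    followed by manual replace statements."""
    digest = hashlib.md5(f"{security_key}{path}{expires}".encode()).digest()
    token = base64.b64encode(digest).decode()
    return token.replace("+", "-").replace("/", "_").replace("=", "")

def sign_url_urlsafe(security_key: str, path: str, expires: int) -> str:
    """Same token via urlsafe_b64encode: '+' and '/' are already mapped
    to '-' and '_', so only the trailing '=' padding is left to strip."""
    digest = hashlib.md5(f"{security_key}{path}{expires}".encode()).digest()
    return base64.urlsafe_b64encode(digest).decode().rstrip("=")
```

Both produce byte-identical tokens; the second version is just shorter and harder to get wrong.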
But overall, everything except their Python example code was great. It's nice to see such wonderful services. Oh yeah, there's one more thing: they're using MD5 for the secure token generation. Meh. Maybe good enough for this use case, but in general it would be a good idea to update the hashing algorithm. Using f-strings instead of format() to build the URL would also be nice. Their support says they're preparing Token Version 2, which should fix the hashing security issue I mentioned. Nice. Of course they've also got 2FA securing access to the control panel etc. Would I use BunnyCDN? Sure. I also checked out BelugaCDN as a cheap alternative, but it didn't seem as good. One interesting thing: BunnyCDN uses CDN77 for their own website.
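Just to illustrate what such an update could look like, a hypothetical sketch — my guess, not their actual Token Version 2; the hostname and parameter names are made up — swapping MD5 for SHA-256 and building the URL with an f-string:

```python
import base64
import hashlib
import time

def sign_url_v2(security_key: str, path: str, ttl: int = 3600) -> str:
    """Hypothetical 'v2' signer: SHA-256 instead of MD5, URL-safe base64
    without padding, f-string URL construction instead of str.format()."""
    expires = int(time.time()) + ttl
    digest = hashlib.sha256(f"{security_key}{path}{expires}".encode()).digest()
    token = base64.urlsafe_b64encode(digest).decode().rstrip("=")
    # example.b-cdn.net is a placeholder hostname, not a real zone
    return f"https://example.b-cdn.net{path}?token={token}&expires={expires}"
```

Same structure as the MD5 version, so a version flag in the query string would be enough to roll it out alongside the old tokens.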
Invalid / misleading error messages? Yeah, sure. The StatusCake service says "Broke/Invalid SSL Certificate" when the TLS handshake fails due to lack of cipher overlap. I'd call that BS. Hey dudes, it has nothing to do with the certificates. Got it?
Well, at least cipher suite selection & configuration works correctly with SSH even if Nginx fails. So large file transfers can be done over SSH, which uses chacha20-poly1305@openssh.com and is about 4.5 times faster than Nginx, which ends up using AES-256-GCM. This is actually very strange: why is AES-256-GCM being used when it's preferred by neither the server nor the client? This is highly confusing. The client prefers AES-128-GCM and the server prefers TLS_CHACHA20_POLY1305_SHA256, and yet AES-256-GCM gets used. Now it's time for the daily WTF. Even though on both the server and the client chacha20-poly1305 would be around 4.5 times faster. Strange combination with Firefox & Nginx.
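Part of the explanation may be that with OpenSSL 1.1.1+ the TLS 1.3 suites live in a separate list that ordinary cipher strings don't control, so a carefully ordered cipher configuration only applies to TLS 1.2 — whether that's what bit me here is an assumption. A small sketch (the cipher names are just examples) showing what a client actually offers:

```python
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
# TLS 1.2 cipher string in the client's preferred order
ctx.set_ciphers(
    "ECDHE-RSA-AES128-GCM-SHA256:"
    "ECDHE-RSA-CHACHA20-POLY1305:"
    "ECDHE-RSA-AES256-GCM-SHA384"
)

# Full list the client will offer, in order. With OpenSSL 1.1.1+ the
# TLS 1.3 suites (TLS_AES_256_GCM_SHA384 & co) are listed first and are
# NOT affected by set_ciphers() at all, so the "preferred" order above
# only governs a TLS 1.2 handshake.
offered = [c["name"] for c in ctx.get_ciphers()]
```

On the server side, Nginx additionally decides with `ssl_prefer_server_ciphers` whether its own order or the client's wins, which adds one more place for surprises.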
Interesting: cURL and Firefox talk HTTP/2 to my server directly, but if requests pass via Cloudflare, the connection to the origin drops to HTTP/1.1. Just an observation; why aren't they using H2 towards the origin?
Studied backup solutions BorgBackup and Restic. My thoughts: a few requirements are absolutely necessary: encryption, de-duplication, compression, and version history. There's no lack of alternatives, but which is the most suitable, that's the question. Restic seems to lack compression, and that's a major bummer: when backing up normal uncompressed data you'll be using 5 to 10x more storage space. Then again, the destination system can of course use a compressed directory, disk, etc.; everything is stackable. Another issue with it is memory consumption: it stores its de-duplication data in memory, so on a typical light file server with a huge disk and lousy CPU & RAM that's a problem. Borg doesn't seem to have a Windows binary package. Bummer. In general terms it sounds an awful lot like Duplicati feature-wise.
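To make those requirements concrete, here's a toy sketch of chunk-based de-duplication plus compression. Fixed-size chunks for simplicity, whereas Borg and Restic use content-defined chunking; the `store` dict here plays the role of the index that Restic keeps in RAM:

```python
import hashlib
import zlib

def backup(data: bytes, store: dict, chunk_size: int = 4096) -> list:
    """Toy backup: split data into chunks, store each unique chunk only
    once (de-duplication), compressed (what Restic is missing).
    Returns the list of chunk hashes needed to restore this version."""
    refs = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        ref = hashlib.sha256(chunk).hexdigest()
        if ref not in store:                   # de-duplication
            store[ref] = zlib.compress(chunk)  # compression
        refs.append(ref)
    return refs

def restore(refs: list, store: dict) -> bytes:
    """Reassemble a backed-up version from its chunk references."""
    return b"".join(zlib.decompress(store[ref]) for ref in refs)
```

Backing up 8 KiB of "A" followed by 4 KiB of "B" stores only two unique chunks even though three are referenced, and version history falls out for free: each backup run is just another list of refs against the same store.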
Really liked one project manager's approach. He said we must start from the hardest part of the project, because if that doesn't work out, then nothing else matters. - That's exactly how I usually do things, and what I've been saying as well.
NTFS defragmentation not required? Just today I checked some servers: many showing over 80% of space fragmented, and interestingly two servers showing 100% space fragmented, whatever that even means. ;) But sure, defrag isn't a bad idea after all. Largest free contiguous space: 48 KB. That guarantees that basically any new file written to that drive is going to be highly fragmented as well. I've also seen cases where fragmentation is 99%.
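Quick back-of-the-envelope on why that 48 KB largest free extent guarantees fragmentation: any file bigger than the largest free extent must be split over at least ceil(size / extent) fragments, no matter how cleverly the filesystem places it. The 10 MiB file size is just an example figure:

```python
import math

largest_free_extent = 48 * 1024   # 48 KiB, per the defrag report
file_size = 10 * 1024 * 1024      # hypothetical 10 MiB file

# Lower bound: even with perfect placement the file needs this many fragments.
min_fragments = math.ceil(file_size / largest_free_extent)
print(min_fragments)  # 214
```

So a modest 10 MiB file lands in at least a couple of hundred pieces; in practice it's worse, since most free extents are smaller than the largest one.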