posted Sep 18, 2016, 5:59 AM by Sami Lehtinen
updated Sep 18, 2016, 6:01 AM
- I generally really dislike running 'relays' due to responsibility and reliability matters. But it seems inevitable that you've gotta run those at times. Therefore I've built a small but well-working high-availability generic data relay which can cross protocol gaps. It allows uploading, downloading, sending and receiving data over HTTP, HTTPS, FTP, FTPS, SFTP, SMTP and SMTPS, with authentication for all protocols naturally. On top there's a small rule engine which processes received data. So you can send email (SMTPS) using SFTP, or receive HTTPS/POST data over FTPS. Why? Well, because not all programs support those easily. Also, some users unfortunately use dynamic IPs, and some other organizations don't allow connections from dynamic IPs, etc. That single module can bridge so many communication gaps that it's a really vital tool for many users and organizations daily. One major benefit of using this kind of relay is proper logging. The service also allows simple and easy queuing. Some apps are so badly coded that they don't even have proper retry logic. So this HA service is available and will receive the data; if the final destination isn't available, it doesn't matter, and the data gets spooled and queued for final delivery. If there's a problem, it's usually pretty easy to check the relay logs and decide which side of the integration is responsible for the problem. Because otherwise they're always claiming that this fault is caused by the other party, ha ha. Then it's just easy to check the logs and say: I really would appreciate it if you would stop lying right now and get your junk fixed.
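The accept-always spooling behavior described above can be sketched in a few lines. This is just a minimal illustration under my own assumptions, not the actual relay: the class name `SpoolQueue`, the file-per-message layout, and the `deliver` callback (which would wrap the real SMTPS/SFTP/HTTPS delivery) are all hypothetical. The point it shows is the key design: the receiving side always accepts and persists the data immediately, and delivery with retries happens separately.

```python
import os
import time
import uuid


class SpoolQueue:
    """File-based spool: accept data immediately, deliver later with retries.

    Hypothetical sketch; a real relay would add locking, backoff,
    and per-destination routing from the rule engine.
    """

    def __init__(self, spool_dir):
        self.spool_dir = spool_dir
        os.makedirs(spool_dir, exist_ok=True)

    def enqueue(self, payload: bytes) -> str:
        # Always accept the data, even if the final destination is down.
        # Timestamp prefix keeps delivery roughly in arrival order.
        name = "%d-%s.msg" % (time.time_ns(), uuid.uuid4().hex)
        path = os.path.join(self.spool_dir, name)
        with open(path, "wb") as f:
            f.write(payload)
        return name

    def flush(self, deliver, max_attempts=3) -> int:
        # Attempt delivery of each spooled message; a message stays
        # queued (file not removed) until delivery succeeds.
        delivered = 0
        for name in sorted(os.listdir(self.spool_dir)):
            path = os.path.join(self.spool_dir, name)
            with open(path, "rb") as f:
                payload = f.read()
            for _attempt in range(max_attempts):
                try:
                    deliver(payload)  # e.g. SMTPS send, SFTP put, HTTPS POST
                    os.remove(path)
                    delivered += 1
                    break
                except OSError:
                    pass  # real code would sleep with backoff here
        return delivered
```

This also gives the logging benefit mentioned above almost for free: every enqueue and every delivery attempt is a natural log point, so disputes about "whose side failed" reduce to reading one log.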
- This is strictly a violation of the TCP specification by CloudFlare - 'Random' problems usually just mean you don't understand what you're doing and how things work. Normal day at the office. As an example, adding comment data to an SQL query stops the server from crashing. My guess is that the timeout was added to deal with crappy code and then created a not-so-obvious secondary issue which arises only in certain cases. Business as usual. A workaround to deal with bad code creates more problems, and then you end up having multiple layers of bad code with complex state in each layer. Business as usual. It works exactly as it's designed to work. Yet that doesn't mean it makes common sense, nor that it works as you think it would. Same stuff every day. You just need to keep digging until you find the problem. This is a good example of that. I'm usually the guy who has to dig deep and find out whatever is causing the problem everyone else is claiming is impossible to solve or 'totally random'.
- iMessage end-to-end encryption (E2EE) seems flaky - Crypto is good as long as nobody takes a look at it.
- After the Ubuntu distribution upgrade to 16.04, duplicity doesn't work and gives this error with my FTPS server: "Fatal error: gnutls_handshake: An unexpected TLS packet was received." Strange. Duplicity and file transfer protocols have been such a problem earlier; things are broken on multiple levels, and if you change something that's broken, you'll just find out that the alternate method is broken too, but in a different way. That's just so frustrating. But still business as usual, unfortunately, every day.
- Journey to HTTP/2 - Really nice article. Comments: It seems that many people talking about HTTP don't know about Gopher. - Some people also don't know that web sites can just as well be served over FTP as over HTTP. Anonymous FTP works just fine for read-only sites. - Long post, nothing new at all. Just some basics. kw: HPACK, PUSH, TCP, keep-alive, HTTP, H2, H2C, HTTP/2, PUSH_PROMISE, TLS, SPDY.