Windows Remote Desktop Services, APT, Cognitive Overhead, Misc
Post date: Nov 9, 2013 7:30:42 AM
Studied, tested and played deeply with Remote Desktop Services. It delivers a full Windows desktop experience to users from a centralized server, which is excellent for Bring Your Own Device (BYOD) work, because all you really need to install on the end device is a Remote Desktop client. Android, iOS, OS X, Linux, etc. users therefore all get the full Windows desktop experience, or when required, only a remote application (TS RemoteApp). I studied this topic quite deeply while configuring and hardening a Windows Terminal Services system. Important things to remember in this case: limit memory usage, share system resources per user instead of per process using Windows System Resource Manager, strictly limit which applications users can run using AppLocker, limit how long processes may keep running on the server when a connection is idle or disconnected, use disk quotas and the Enhanced Mitigation Experience Toolkit (EMET), and drop unnecessary services using the Security Configuration Wizard (SCW). Plus the obvious basics like Group Policies. It's a bit sad that I had to let my Windows server go; I used a virtual Windows Server 2012 Datacenter Edition for all this testing, as well as several different tablets. For efficiency reasons I didn't explore VDI, because it requires too much hardware per user. Remote Desktop Services and RemoteApp are exactly what I was looking for in this particular case: the things that need to work, work, and everything except a very restricted set of applications is completely disabled. I also configured and created a custom shell, which only allows easily starting the listed applications and logging out, without all the mess of a traditional complex desktop, which isn't optimal for small touch screens. I have only something like 20 .exe files whitelisted in AppLocker, and many of those are Windows binaries that must be allowed to run for the RemoteApp and Remote Desktop login process to work at all.
So everything is locked down and hardened as much as possible, following the principle of least privilege. I also checked out the Windows Server 2008 R2 Security Technical Implementation Guide (STIG) from DISA, but it turned out that most of its measures relate to AD, and for TS/RDP-only usage it didn't contain anything meaningful. Also check out the Desktop Virtualization article for general info. (DaaS, Desktop as a Service, Azure RDS, Mohoro, Drawbridge)
Read: the Security Information and Event Management (SIEM) article. Nothing new; all the parts of SIEM are very familiar and already in use, but I have to admit that I haven't ever used an integrated SIEM, which is the whole point here: getting data from multiple sources analyzed and correlated together.
Studied: Statistically effective protection against APT attacks, a study on the effectiveness of popular defense measures against APT in Finland. I have to confess that the attacks which have been successful have luckily mostly involved common malware. Even when the attackers were able to access systems with valuable information, as far as I know that information has never been abused or downloaded. So the attackers had access to the 'treasure room' but didn't exploit the possibility. This only tells me that attackers usually control such a huge number of computers that they don't even properly check what they have gained access to, which is basically a great thing. Another huge question is "cleaning" computers. I would never accept it, but many people insist that it's too much of an annoyance to drop a fresh image on a computer and reconfigure everything, so malware is often just removed from the computer using a few virus scanners / anti-virus tools. As we all well know, this wouldn't do anything against an APT or any other more sophisticated targeted attack; those can easily evade this kind of detection. Btw, this is one more reason why AppLocker is used to lock down execution of binaries by the hash of the executable: if a binary has been modified, it won't run at all, and foreign binaries won't run either, even if someone manages to get one onto the system.
Cognitive Overhead: this is something I can completely agree with. Good products should work simply and intuitively. If the user needs to read a 1500-page (dense) manual to be able to use the product, then it's really hard even for professionals after a while, because nothing just works: you have to keep mnemonic after mnemonic and memo after memo about how to do things, because everything is seemingly unrelated. Been there, done that, and it's mentally excruciating. Especially when a production system is down, you're really tired after 12 hours (or more) of working non-stop, the pressure and stress are on, and you know there's something you need to do, but just can't remember it right now. Flip one of 10,000 configuration parameters, or reset some random database field in some random table. It's easy if you KNOW what you're doing, but it's especially painful when you know you did this exact thing 5 years ago, and now it seems to be the same problem, but you don't remember exactly what needs to be done to fix it. How wonderful it would be if things just worked, without all this pain.
Studied: a short GoLang concurrency guide, and the talk "Concurrency is not Parallelism (it's better)".
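The talk's core point is that goroutines and channels let you structure a program as independently executing pieces, whether or not they actually run in parallel on multiple cores. A minimal sketch of that worker/channel style (the names and the toy workload are mine):

```go
package main

import "fmt"

// worker consumes jobs from one channel and sends results on another.
// Several workers run concurrently; the Go runtime decides how much
// of that concurrency is executed in parallel.
func worker(jobs <-chan int, results chan<- int) {
	for j := range jobs {
		results <- j * j // stand-in for real work
	}
}

// sumSquares fans n jobs out to three workers and collects the results.
func sumSquares(n int) int {
	jobs := make(chan int)
	results := make(chan int)
	for w := 0; w < 3; w++ {
		go worker(jobs, results)
	}
	go func() {
		for i := 1; i <= n; i++ {
			jobs <- i
		}
		close(jobs) // ends each worker's range loop
	}()
	total := 0
	for i := 0; i < n; i++ {
		total += <-results
	}
	return total
}

func main() {
	fmt.Println(sumSquares(5)) // 1+4+9+16+25 = 55
}
```

Note that nothing here mentions threads or cores: the decomposition into communicating pieces is the concurrency, and parallelism is just one possible execution of it.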
Wondered about the morals of telcos. "Well, my friend has a really old laptop, and she complained that the Internet was slow and needed an upgrade. What did the operator do? They sold her an upgrade from 20 Mbit/s to 110 Mbit/s. You can guess 10 times (binary) whether that helped at all, or answer as a boolean, True or False." The basic question is: are they scamming people on purpose, or is their technical customer service just so clueless that they don't understand the first thing about how these things work?
It seems that I have to do something about my computer's air circulation. Actually, I assume this happened while I had the cover open for an extended period. If this doesn't tell you anything, it's the drive's temperature SMART data:
190 Airflow_Temperature_Cel 0x0022 068 032 045 Old_age Always In_the_past 32 (6 18 32 28 0)
Read about F2FS. It's interesting that it uses a different allocation policy (threaded logging) to work around the performance problems generally related to log-structured file systems (like YAFFS2) when the disk is nearly full. Also studied the Flash Memory article and related links in detail: the difference between NOR and NAND, read disturb, write endurance, transfer rates. Quite an interesting topic, even if I don't think it has much practical meaning; the only benefit is knowing to avoid random writes, especially with poor flash controllers.
Read a bit more about ext4 details, and enabled data inlining for my ext4 drives; it seems to be disabled by default. It won't make a huge difference anyway, because the inode size on ext4 is only 256 bytes, so only very small files, 60 bytes to be exact, can be inlined.
Some older stuff, in compact dump form.