My personal blog about stuff I do, like, and am interested in. If you have any questions, feel free to mail me!
Well, I have been considering a lot of options for a side project or business idea. I deal with many businesses on a weekly basis, and I'm both technology and customer-need oriented. That's quite a good mix, as far as I can tell. Because I happen to live in Finland, which is usually late to global trends, maybe I should bring to the Finnish business environment something that has already been confirmed as a working business model in other countries. But it needs to be something which can't be trivially provided globally by one service provider, due to local regulations or some less strict restrictions: local business knowledge or even the language barrier might be a good enough moat. Or maybe I should just go really lean, write about my ideas, and see if anyone thinks any of them is any good. I'm not afraid of anyone stealing the ideas, because most of them are quite straightforward. Proper execution, marketing and contacts are what's really needed to make these things work.
Actually, while writing this, I figured out that there might be one concept that could be great for SMBs which don't have their own staff to take care of it. It's related to virtualization & cloud, but it's a very specific case, which might be needed due to mobile devices and the legacy software used by businesses. As far as I know, any proper IT department can handle the thing by themselves, but I'm sure there are many customers who would need the service, yet don't have the knowledge to take care of it themselves. For example, providing access to Windows applications for mobile / thin client users using Azure Remote Desktop / Web Access. But on the other hand, building a service to cope with legacy software isn't an optimal idea; I would prefer something that's starting to trend and isn't already dying out. Maybe I'll simply try to find something that's required, quite simple, and hasn't yet been provided as a nice HTML5 web / mobile service.
If you have a perfect idea of what the world needs, just let me know!
In my previous post I mentioned very bad hardware management. But I can also assure you that using even a properly managed IaaS or PaaS cloud won't fix the following issues. See Part I (Hardware & Concept).
1. Total negligence of security basics
2. Using the manufacturer's / software vendor's default passwords
3. Using the same password for everything, even when it isn't the default
4. Assuming that if it works, it's correctly and securely configured
5. "Nobody would try to hack our system anyway, so what does it matter?"
6. Lack of all kinds of generic security-related software updates
7. A total lack of technical understanding of how things work and how they could be made secure
I have covered a ton of these areas on my blog. Using any cloud won't fix these issues. So what if the cloud provider is ultra secure, if all users have full access to all areas and the passwords are super bad, like the user's first name in lower case, the company name, or something silly like Password or the traditional 123456?
None of these things are new, and I have seen all of them live in production.
As I mentioned, using or not using the cloud won't make such a big difference. But if you use a properly managed cloud service, security is most likely to be better than without the cloud.
To summarize, the cloud isn't a miracle solution. It still needs all the normal security assessments, if those are made at all on any level.
People are afraid of the cloud for no reason. First of all, "cloud" is a marketing term; without going into complex details, it's simply meaningless. Any computer running at my home and connected to the Internet could be hosting IaaS, PaaS and SaaS services. So the cloud itself doesn't mean anything at all, other than services provided over the Internet. Well, what's the opposite of the cloud? Self-hosted systems. Just today I witnessed a situation at a customer where their cooling system was leaking water: the servers sat on upside-down beer crates, but the UPSs were on the floor under those crates, getting soaked by the water leaking from the cooling system. The servers also didn't have redundant disk systems, just a daily tape backup. Of course the backup tape is the same tape that was inserted into the backup tape drive about five years ago; I'm quite sure it's well beyond the state where anything could be restored from it. See Part II (Software & Configuration)
That's the normal situation. So, is the cloud worse or better than that?
I think the cloud is a much better option than 'average to poor' computer facilities.
Of course, even if you're using the cloud, you don't need to be stupid. You can still have encrypted daily, hourly or real-time off-site backups to your home, office or whatever location (like another cloud) if you want to. So even if the worst happens and the cloud totally disappears right now, you can still recover in reasonable time, and you won't lose any, or much, of the important data.
Many security and tech freaks often seem to completely forget how dreadful the reality can actually be.
Btw, I did take a photo of that water leak, the UPSs and the servers. I can share it if someone is interested. It's not fiction.
- Software modularity is very important. Way too many apps are just huge pots full of spaghetti, which makes the system hard to maintain. With current technologies, an application can be built from totally independent modules, where stuff that was earlier processed within a single process is now handled using remote procedure calls. Therefore each "procedure" of the program can even scale out automatically, and be remapped or changed in a live production system without any problems. If the process was earlier a-b-c, you can change it on the fly to a-b-d-c. Likewise, if b has some kind of issue, you can add the steps a-l-b-l-c, where l is a detail logger; when the issue is resolved, you can just drop those logger modules from the app on the fly. All processing is based on a process and data flow routing table, and any module can be swapped for a different module that implements the same interface. Of course this has been the basis of OOP for a long time, but now it can be done a bit differently: instead of Java's dynamic linking at runtime, the links are built between processes, not classes. This also greatly simplifies application design, because the end result can't be that pot of spaghetti.
I'm sure we've all heard the story where they say: no, no, it can't be changed, because it affects the whole program. Well, it shouldn't be that way.
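To make the routing-table idea concrete, here's a trivial in-process Python sketch; all names are mine, and the real thing would route between separate processes over RPC rather than plain function calls. The point is that the pipeline itself is data, so a logger step can be spliced in and dropped again without touching the modules themselves:

```python
# Toy sketch of the routing-table idea: the pipeline is data, so steps
# can be spliced in or dropped while the system runs.

def a(x): return x + 1          # stand-ins for independent modules
def b(x): return x * 2
def c(x): return x - 3

def logger(x):
    print(f"passing through: {x}")
    return x                    # a pass-through 'detail logger' step

def run(pipeline, x):
    """Route the value through whatever steps the table currently lists."""
    for step in pipeline:
        x = step(x)
    return x

pipeline = [a, b, c]                        # process is a-b-c
debug_pipeline = [a, logger, b, logger, c]  # a-l-b-l-c while debugging b
```

Swapping b for any other callable with the same interface is a one-line change to the table, which is the whole point.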
- It's really hard to glue security onto a product which was built to be totally insecure.
- Some kind of strange malware got running on one undisclosed computer. It was only F-Secure's DeepGuard which warned about the application when it tried to modify certain system files & configurations. Horrible stuff; browsers are so insecure. I'm really glad that F-Secure was running, because otherwise the attack could have gone totally unnoticed. The only bad thing was that I didn't know what had been changed before F-Secure warned about the ongoing threat. This led to an immediate disk wipe and re-installation of the system. Hard work, and annoying, but totally mandatory to maintain system & network security. VirusTotal didn't detect anything wrong with the file, even though I found a copy of it in the browser's cache; nobody had ever scanned the file before. It seems that the blacklist approach is nearly useless against current threats. As we all know, firewalls are only one small part of layered security. And security is hard; it's nearly impossible to be secure, and when a system is secure, it's practically unusable. This is also a great reason to have separate workstations, one for 'secure' and another for 'risky' business. The risky workstation won't contain any personal or private data, and can easily be reset to its original state, whatever happens in between. The 'secure' workstation is only used to process encrypted and signed text data from trusted sources, and it's never directly connected to the Internet.
- These hacks just prove what I thought originally: it's a totally wrong attitude to use email as a login or for password recovery. It's simply insecure and pointless. - I just noticed that DigitalOcean offers password reset by email. Why? That's a sure and great way to make things insecure.
- I'm tired of all this discussion about whether the cloud is more or less secure than other types of systems. Well, it's both more and less secure; it totally depends on what you're comparing to what. Without extensive analysis of both systems and all their details, a proper comparison can't be done. I personally would say that a professionally run PaaS / SaaS cloud is usually more secure than a random server in an office closet whose operating system and Internet-facing server applications haven't been updated for ages, with a broken disk in the RAID5, or even better, only a single disk and no backups at all. But then people say that Amazon was down for X hours. So what? Your own server was down for the whole weekend, but nobody bothered to make news about it.
- Configured a Tor relay and exit node on a VPS, so I can utilize excess bandwidth for something useful.
- What is EMET?
- Wrote a PowerShell script which runs when a user logs in. The script checks whether the user is an Administrator or belongs to certain user groups, and then launches a suitable program for them: for admins, Explorer; for power users, the full-featured program UI; and for the rest, a really restricted UI. All other access is locked down using AppLocker, file access rights, etc. After all this tuning, I feel quite confident that the system is at least semi-secure, unless there's application backdoor / administrator / service access to the server. Honest users can't do much damage, even if they wanted to.
- Finished reading the complete design paper of the Phantom Protocol by Magnus Bråding (PDF) and its protocol implementation paper (PDF). It's quite similar to Tor, but just better. And just as bad against a global adversary, due to low latency. A good thing about it is that it allows high-availability anonymous tunneling, which Tor doesn't.
- Played with VirtualBox and Whonix (an anonymous operating system); it seems easy to set up and use. All you need to make sure of is that there's absolutely no identifying information on either of the machines. So even if the hidden machine is hacked, the hackers shouldn't have any way to find out where and what the machine actually is.
- Studied MPTCP; it would be nice, as soon as it's actually usable.
- Something completely different: Studied Ice Classes, including Polar Class and Finnish-Swedish Ice Classes.
1. Lack of clear requirements - As always: we need a system tomorrow which does everything we need. We don't even know what those things are, but it needs to fulfill them. Leads to scope creep.
2. Incompetent team - They don't have any kind of clue how hard the project is going to be. Usually this is directly linked to the first factor: if the customer's team were competent, the first step wouldn't fail.
3. Management did not commit to the project - That happens all the time: 'someone' will take care of these things at 'some time', but no clear resources are allocated to the project. As you can guess, this leads to failure, because someone is doing something, some day. Or maybe not? Leads to project resource starvation.
4. Service / software provider didn't give enough consultation to the customer - Well, that's true. Some customers simply say they have a competent team and don't want to pay for consultation. As far as I have seen, this has always led to failure. It also leads to a situation where the provider only acts on clear, actionable requests, which are hard to make if you don't pay for consultation, don't know what you need, and your team is totally incompetent. This is not exactly the same as lack of management commitment, but it usually leads to a similar end result.
5. Lack of change management - Yes, that's true. But what change management are we talking about? The initial requirements can't change if there are no initial requirements other than 'it needs to work'. So the actual requirements, and endless changes to them, will follow.
6. Lack of end user / client personnel training - The same thing applies here as to the 4th step. The customer doesn't want to pay for such services; they just assume that everything is provided and that users can somehow magically use the system. But that's a fail. After all, training clueless people who don't know what they need is going to be quite a waste of time, so there should be plenty of resources reserved for this. Based on the previous steps, this is also the point where some of the things that should have been initial requirements start popping up. - What, it doesn't do X? But we assumed ...
7. Failure to communicate - Of course there will be failure. It's really hard to find a language understood by both sides, due to the failure of the 2nd and 4th steps. Things keep going around in circles because nobody understands what they really need, since they don't even know it themselves.
8. Stakeholder is not part of the project core team / management - This usually leads to a situation where the customer wants all kinds of nice-to-have stuff, which in turn means that if step 1 is done properly (rare), the cost of the project ends up so high that the stakeholder cancels it before it really starts. Simply put, the people planning the project goals and requirements aren't the people paying for it. This is also one of the main reasons why public sector projects are often so silly: I don't want to do this monthly task, which takes me 5 minutes, but I don't really care if automating it costs more than my yearly salary.
9. Of course, all of these steps combined lead to a situation where estimates are inaccurate and there's huge project risk for both parties. - I have simply said no to many projects where all of this was evident from the very first meeting.
10. Fixing the wrong problem - This is evident even in smaller software projects. There's a problem X and it's fixed in a way that fixes just X but doesn't fix its underlying cause. Utilizing the Five Whys would help a lot.
11. Excess complexity - The customer wants a system so complex that it vastly exceeds their organization's capability to manage it. This is mainly linked to the 2nd step. During the design, while people struggle to understand what's required, complexity grows enormously. After everything is barely working, the project is declared a success and complete. It doesn't take long until the customer has huge problems with the system: they don't understand how it works, and they don't have nearly the skills and competence needed to find out what's wrong with it. This usually starts a different kind of project technical debt growth, where people 'find a way to get it done', even if it's not how the system was originally designed to work, which leads to further problems. After a while, the project is mostly discarded and only the parts which seem to somehow keep working are used. This process can be accelerated by key personnel changes, where the tacit knowledge of how things should be done is lost. - Just KISS: use a very basic system which just delivers what you need, and don't build a nuclear plant to heat up your sauna. It will be a hideously complex and expensive thing to keep operational.
This is a project I did a very long time ago, just for fun. I guess it was at the end of 2001 or so. Just writing up old stuff, but I liked it. I just happened to have excess hardware and bandwidth.
A cluster which automatically downloads and seeds everything released at the ShareReactor file sharing indexing site.
Technology:
EDonkey2000 command line client
Java management system on one node
Nine Windows servers
10 Megabit symmetric internet connection
A CLI over TCP/IP was used from the management system to control the individual ED2K clients on the servers.
The Java management system scraped the ShareReactor releases page every 15 minutes for new releases. If a new release was found, it was added to the seed list. When there was new stuff on the seed list, the management system selected a node to seed the file on. Node selection was based on the server's network load, free disk space, file popularity (source + downloader counts) and seed ratio (sources / downloaders). The same parameters, plus file age, were also used to evict files from nodes that exceeded certain load factors.
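The selection logic can be sketched roughly like this; the exact weights the original Java system used are long gone, so the scoring below is purely illustrative of the general shape:

```python
# Illustrative sketch of the node-selection scoring; the formulas are
# my guesses at the shape of the logic, not the original weights.

def node_score(net_load, free_disk_gb, sources, downloaders):
    """Lower is better: prefer idle nodes with free disk for files
    that have many downloaders per available source."""
    demand = downloaders / (sources + 1)      # undersupplied files rank high
    capacity = free_disk_gb / (net_load + 1)  # free, idle nodes rank high
    return -demand * capacity

def pick_node(nodes, sources, downloaders):
    """nodes: list of (name, net_load, free_disk_gb) tuples."""
    best = min(nodes, key=lambda n: node_score(n[1], n[2], sources, downloaders))
    return best[0]
```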
How did it work in the real world?
Well, it did pretty well, and everything worked really nicely. But I found out that unless my main goal was to keep seeding old stuff, one server alone was more than enough to saturate the bandwidth with in-demand new releases. Therefore the additional servers and disk space didn't provide any extra bandwidth, but they allowed me to seed and store files long after they were in high demand. All in all, it was a very nice test / practice project for an automatically managed file sharing cluster. After I had tested things and played with it for about two weeks, I shut it down.
KW: emule, edonkey 2000, ed2k, file sharing, server, hosting, automated, automatically, automation, cluster, clustering, server, servers, disk space, bandwidth management, Java, Sharereactor, Overnet, ED2K, "Harness the power of 2000 electronic donkeys".
Quite random stuff; many of these things I should really write out extensively. But because they have been sitting way too long in the backlog, I'm just going to dump them as I stored them. This post is totally mixed stuff. I did weed out the topics I'm hopefully going to write out in more detail soon. Therefore, sorry, no links to sources etc.
- I wrote a light JSON / SQL RPC gateway using Python and basic auth. Works very well. Engineers said it would be too complex to make a generic SQL interface; it took me about one hour to make a proof of concept. The query contains an SQL statement, data is returned in JSON format, and authentication is basic auth over SSL (HTTPS). Of course this solution is very insecure, but it's also ultimately versatile.
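A minimal sketch of the two core pieces of such a gateway, using only the Python standard library; the function names are mine, and like the original it deliberately executes whatever SQL it is given, so it should never face the open Internet:

```python
import base64
import json
import sqlite3

def check_basic_auth(header, user, password):
    """Validate an HTTP Basic Auth header value like 'Basic dXNlcjpwdw=='."""
    if not header or not header.startswith("Basic "):
        return False
    try:
        decoded = base64.b64decode(header[6:]).decode("utf-8")
    except Exception:
        return False
    return decoded == f"{user}:{password}"

def run_query(db, sql):
    """Execute an arbitrary SQL statement and return the rows as JSON."""
    conn = sqlite3.connect(db)
    try:
        cur = conn.execute(sql)
        cols = [d[0] for d in cur.description] if cur.description else []
        rows = [dict(zip(cols, r)) for r in cur.fetchall()]
        conn.commit()
        return json.dumps(rows)
    finally:
        conn.close()
```

Wiring these into an HTTPS handler is the easy part; the versatility (and the insecurity) all comes from passing raw SQL through.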
- Tested Google Chrome OS with my laptop. It's very nice and lightweight. All we need are apps that run in the web browser / from the cloud. That's it; it's a huge paradigm shift in computing habits. After that, replacing a damaged / lost computer gets way easier. I have spent way too much time recovering lost data.
- The challenges of organic software growth, refactoring, etc., which finally make a product totally unmaintainable.
- Other stuff I've been studying: software product localization, the Cloud Security Alliance, APaaS, IaaS, integration as a service, private cloud, mixed cloud. Multi-tenant applications, CloudFlare, CloudFront, business processes, service oriented architecture, knowledge management, structured information, how commitment is essential for success (nothing new), information quality, enterprise architecture, Facebook integration, Ajax, REST, HTML5, risk analysis, business strategy, business capabilities, business intelligence, master data. Integration testing, data access, test cases, load testing, file naming, record ids, data verification, transactions.
- SVG files can contain executable content and can be a security and privacy issue.
- Studied a book about Titanium, PhoneGap, CocoonJS and native iOS / Android development options.
- Efficient feedback, automatic retry logic, extensive exception handling, database indexes, database performance, the security point of view. I should write a nice blog entry about all that, and about the DevOps role and what I found really didn't work out in production. With experience, most of those things can be fixed at the design stage, before the bad design gets implemented.
- The objective of a Cloud Services Broker (CSB) is to manage the complexity of the growing number, and types, of multi-enterprise integration. This eBook will teach you how to connect to multiple cloud providers seamlessly, leverage a CSB in your cloud strategy and integrate back-end systems in the cloud.
- Real-time integration, error handling, retries, etc. versus the traditional file-based approach. How many ways there are to fail with both implementations. I'm well experienced in failing with both, so I know how not to fail.
- Spent one weekend with friends talking and deeply thinking about game economics & virtual money.
- I have heard way too many times that this or that product is so complex that nobody is able to understand it without several weeks of training. How do you simplify things, so that technically complex products are easier to maintain and use?
- I just wonder how people are able to use far more complex products without any official training. I would like to ask: did you pay 2000 USD/EUR for your Gmail training or user certification? Or are you just super smart, since you were able to figure it out without that mandatory (?) training?
- I also wonder how open source developer tools like Python, Java and MySQL can be used without any training at all. I think training isn't required if products are properly designed and the users have some sense of what they're going to do.
- Finished reading a book about web landing page and UX optimizations for providing a better user experience.
- Read several Wikipedia articles and a few others on these topics: capitalism, free trade, protectionism, economy, politics, socialism, globalization, colonialism, imperialism, nationalism, balanced trade, fair trade, the Tobin tax, mercantilism, mixed economy, planned economy, social democracy, the Nordic model. About five hundred pages of this stuff, just to improve my general economics knowledge. I personally have always liked the idea of free trade, and wondered about the farming and shipbuilding subsidies paid by the EU and Finland. What's the point? If it's bad business, you shouldn't do it.
- Business Model Design, Information Architecture, Enterprise Architecture, Engineering Systems, Business and Information Systems Engineering, Software, Systems Engineering, Engineering and Management of Information Systems. Strategic Service Platforms, Business Platforms.
- Different levels of application hosting:
A) One physical server per customer
B) One virtual server per customer
C) Physical server with a multi-instance installation (each customer having its own separate processes & databases)
D) Physical server with only one application installation (with possible multiple processes) utilizing multi-tenant architecture
E) Several front end servers serving all customers, with multi-tenant backend server(s) (aka, server specific roles)
F) Several servers, dynamically sharing resources, starting and scaling required services while providing service to all customers
A is the simplest and most inefficient solution, while F is technically the most complex, but the most efficient and fully dynamically scalable. Of course this directly affects system reliability etc. At levels A-D, a server outage means those customers can't get any service. And by server outage I don't mean only hardware outage, but whatever reason causes the server to be down. Even if that server were a virtual server backed by a SAN, there could still be a reason (other than storage failure) which causes the whole system to fail.
- Spotted a bug in Notepad++: the cursor position changes suddenly when the line number length grows. It's a clear bug. I just wonder how they were able to introduce such a bug!
- Jolla first impressions by Seravo. Well, I think it's a bit over-hyped, but that's totally fine, because I live in Finland and it's a Finnish product.
- Started to use mark & don't sweep with the office fridge. We tag all stuff weekly with small generation-numbered flags, or, depending on the container, write the number directly on it. If there's stuff which is older than two weeks and untagged, it's free for anyone to consume / throw out. Some guys use the copy collection method with two fridges: weekly, one fridge is emptied, and after that stuff is placed into it after use, so whatever isn't being used weekly is thrown out / used.
- Based on the same theme, it's easy to purge stuff from home. Whatever you haven't used for a year, just get rid of it! If it's too hard to do this from memory (it shouldn't be), you can use the mark & sweep methodology. Get rid of stuff you don't ever need; it makes life much simpler and more organized.
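For fun, the fridge scheme maps directly onto a generational eviction rule. A toy Python sketch, where the names and the two-generation limit are my own choices:

```python
# Toy version of the fridge rule: items are 'marked' with the generation
# (week number) when last tagged; anything not re-marked within max_age
# generations gets evicted.

def sweep(items, current_gen, max_age=2):
    """items: {name: last_marked_generation}. Returns (kept, evicted)."""
    kept, evicted = {}, []
    for name, gen in items.items():
        if current_gen - gen > max_age:
            evicted.append(name)     # older than max_age weeks: out it goes
        else:
            kept[name] = gen
    return kept, evicted
```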
- Finished reading and studying all Secure Share documentation in detail.
- Web browsers should allow an optional mode where each tab runs in its own isolated environment. It would efficiently prevent data leakage, which is way too common in all current browsers. Basically, data leaking is a built-in feature of all current browsers and there's not much you can do about it.
- I'm completely fed up with Google, Yahoo and the other big corps and their spying. But running your own server has drawbacks too: the virtual server that disappeared overnight had all my SSL and mail configurations. ;( Maintaining your own server can also be very, even extremely, troublesome at times. Naturally, there will be trouble exactly when your schedule is overbooked and you're about to leave for a one-month vacation in Thailand, or even worse, a trip around the world, so it's practically impossible to get proper connectivity and time to fix the systems. That's why I want my web pages and email outsourced. Google provides absolutely great services, but at a terrible price. Of course, for anything really secret, PGP with confirmed key fingerprints is the only viable solution.
- I did plan to write about all these topics, but I've got too much to write about anyway. So here's just a compact list:
Networking, security, web applications, APIs, Python, XML, JSON, HTML, SOAP, HTTP, SQL, Git, Agile, business economy, business optimization, long term investing, prioritization, making goals crystal clear, work-life balance, stock picking and startups. Time management, freemium as a business model.
- Many things can't be done properly because of being in such a hurry. Hmm, this blog is a perfect example of that. That's what gets expensive with real products that should be delivered in volume: customers have constant problems, so either support becomes really expensive for the service provider, or customers are unhappy and discard the product. Is it better to update systems manually than to use automatic updates? It's an endless discussion; I would love to have unit tests and automatic updates. Automatic updates provide a solution that works where it's known to work and doesn't make the random human errors that happen during complex manual updates, which are hard to fix later and can cause extensive data corruption and other major trouble. So being too busy to do anything properly, or not automating things, just creates more acute problems and mess.
- Performance and suitability testing of different modules for a new software product: testing databases, web servers, application servers and different mobile front-end technologies.
- Added Remote by Jason Fried to Kindle.
- I had a hard time explaining to the eDonkey2000 developers that they should weight block selection by block availability, so that rare blocks are downloaded first. They stubbornly claimed that purely random selection is just as efficient, but it isn't. I had to send them several screenshots where there were a few rare blocks that everyone was waiting for, yet the clients chose a random missing block to download from those sources, totally ignoring its availability, i.e. how rare it is.
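Rarest-first selection is simple enough to sketch in a few lines of Python; this is my illustration of the idea, not eDonkey2000 code:

```python
import random
from collections import Counter

# Count how many peers offer each block we still need, then pick among
# the least-available ones, breaking ties randomly so that concurrent
# downloaders don't all hammer the same block.

def rarest_first(missing_blocks, peer_block_sets):
    """missing_blocks: set of needed block ids.
    peer_block_sets: one set of block ids per reachable peer."""
    availability = Counter()
    for blocks in peer_block_sets:
        for b in blocks & missing_blocks:
            availability[b] += 1
    candidates = [b for b in missing_blocks if availability[b] > 0]
    if not candidates:
        return None  # no reachable peer has anything we need
    rarest = min(availability[b] for b in candidates)
    return random.choice([b for b in candidates if availability[b] == rarest])
```

With three peers offering blocks {1,2}, {1,2} and {1,3}, block 3 has a single source, so it gets fetched first, which is exactly the screenshot situation described above.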
Ouch, it seems that my to-blog-about list still has entries which are more than two years old. Now I've got some catching up to do.
I would prefer to have a browser with an improved security model. Current browsers share data between tabs and sessions, which makes request forgery attacks much easier, among other security and privacy issues. Whenever I open a tab, I should be able to select whether it's an independent (clean) tab, a globally shared tab, or a forked (shared) tab. What's the difference between these choices?
A global tab would use the global cookies, data store, etc., so it would work just as browsers do now.
An independent clean tab would behave just as if I had started a new private browsing (incognito) session. If I then want to open something in a new tab sharing that session so far, I could open the page in a forked tab, which shares data with the previous tab just as browsers normally operate.
Tab colors could be used to make it clear which tab is sharing what sessions.
Why all this trouble? Well, I could have G+ open in one tab, Google Sites in another, Gmail in a third, Facebook ... but no data would be shared between the different services. As far as I can see, this approach would tremendously improve privacy and somewhat improve security. With this solution there wouldn't be any reason to block 3rd party cookies.
Of course, if this approach is taken to the max, no data can be shared between tabs at all; otherwise there would be easy ways to recognize users even when they're using these secure tabs.
If you have any thoughts about this, just let me know.
Just as with Tails, if you want to do something that isn't linked to your other actions, it's basically best to restart the whole computer between tasks, to get rid of all session identifiers.
P.S. I just hate that Firefox doesn't allow running multiple parallel (private mode) instances; that would partially solve this problem. Btw, Chrome also fails this simple security test. Other browsers I didn't even bother to check.
List of books I have read lately and some quotes from those.
Read: The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses (Eric Ries)
- It is epitomized in the paradoxical Toyota proverb, "Stop production so that production never has to stop." The key to the andon cord is that it brings work to a stop as soon as an uncorrectable quality problem surfaces, which forces it to be investigated. This is one of the most important discoveries of the lean manufacturing movement: you cannot trade quality for time. If you are causing (or missing) quality problems now, the resulting defects will slow you down later. Defects cause a lot of rework, low morale, and customer complaints, all of which slow progress and eat away at valuable resources.
- Therefore, shortcuts taken in product quality, design, or infrastructure today may wind up slowing a company down tomorrow.
- Similarly, the more features we added to the product, the harder it became to add even more because of the risk that a new feature would interfere with an existing feature.
- Taiichi Ohno gives the following example: When confronted with a problem, have you ever stopped and asked why five times? It is difficult to do even though it sounds easy. For example, suppose a machine stopped functioning: 1. Why did the machine stop? (There was an overload and the fuse blew.) 2. Why was there an overload? (The bearing was not sufficiently lubricated.) 3. Why was it not lubricated sufficiently? (The lubrication pump was not pumping sufficiently.) 4. Why was it not pumping sufficiently? (The shaft of the pump was worn and rattling.) 5. Why was the shaft worn out? (There was no strainer attached and metal scrap got in.) Repeating "why" five times, like this, can help uncover the root problem and correct it. If this procedure were not carried through, one might simply replace the fuse or the pump shaft. In that case, the problem would recur within a few months.
Read: Steve Blank's book The Four Steps to the Epiphany
Read: The Entrepreneur's Guide to Customer Development
Read: Single page apps in depth (a.k.a. Mixu's single page app book) (Mikito Takada)
- Good code comes from solving the same problem multiple times; or refactoring. Usually, this proceeds by noticing recurring patterns and replacing them with a mechanism that does the same thing in a consistent way - replacing a lot of "case-specific" code, which in fact was just there because we didn't see that a simpler mechanism could achieve the same thing.
- We want to solve three problems:
  - Privacy: we want more granular privacy than just global or local to the current closure.
  - Avoid putting things in the global namespace just so that they can be accessed.
  - We should be able to create packages that encompass multiple files and directories and be able to wrap full subsystems into a single closure.
- Today's new code is tomorrow's legacy code; the best you can do is delay falling into the bad patterns.
- Independent packages/modules. Keep different parts of an app separate: avoid global names and variables, make each part independently instantiable and testable.
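The privacy and independent-instantiation points above boil down to the classic closure-based module pattern. A minimal sketch (the names here are my own, not from the book): state lives inside a factory function's closure, so nothing leaks into the global namespace, and each call produces an independent, testable instance.

```typescript
// Factory function: "count" is private, reachable only through
// the returned methods, and never touches the global namespace.
function createCounter() {
  let count = 0;
  return {
    increment(): number { return ++count; },
    value(): number { return count; },
  };
}

const a = createCounter();
const b = createCounter(); // independently instantiable
a.increment();
a.increment();
console.log(a.value()); // 2
console.log(b.value()); // 0 -- instances don't share state
```

Because each instance is created by a plain function call, tests can construct as many isolated counters as they need without any global setup or teardown, which is exactly the testability property the quote is after.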
Read: Getting Things Done For Hackers (Lars Wirzenius)
My own comment:
- Lars uses pretty much the same methods as most tech people, I guess. If you have a systematic way of dealing with things, you'll manage tasks well, as long as you don't add too many tasks to the queue. My personal challenge isn't managing the things I need to do; it's deciding, at a very early stage, which things I'm not going to do and which things I'm going to complete. It's totally useless to start 10000 things and finish only a few; you end up wasting a lot of resources. Based on my experience, this is a way too common problem in many companies: things get done, consuming resources, but are never finished, and therefore never provide any benefit, or in the worst case just cause a ton of trouble. This is why I love Kanban cards so much: they make it clear that you can't keep adding things to do endlessly, you simply have to focus on the essential.
Read: HFT (Jonathan Brogaard)
Read: Innovator's Dilemma (Clayton Christensen)
- Introduction This book is about the failure of companies to stay atop their industries when they confront certain types of market and technological change.
- It is about well-managed companies that have their competitive antennae up, listen astutely to their customers, invest aggressively in new technologies, and yet still lose market dominance.
- Products based on disruptive technologies are typically cheaper, simpler, smaller, and, frequently, more convenient to use.
- Taken in sum, these chapters present a theoretically strong, broadly valid, and managerially practical framework for understanding disruptive technologies and how they have precipitated the fall from industry leadership of some of history’s best-managed companies.
- Companies find it very difficult to invest adequate resources in disruptive technologies—lower-margin opportunities that their customers don’t want—until their customers want them. And by then it is too late.
- The fear of cannibalizing sales of existing products is often cited as a reason why established firms delay the introduction of new technologies.
- But the problem established firms seem unable to confront successfully is that of downward vision and mobility, in terms of the trajectory map. Finding new applications and markets for these new products seems to be a capability that each of these firms exhibited once, upon entry, and then apparently lost. It was as if the leading firms were held captive by their customers, enabling attacking entrant firms to topple the incumbent industry leaders each time a disruptive technology emerged.
- The organizational structure viewpoint would predict that, unless they created organizationally independent groups to design flash products, established firms would stumble badly.
- Disruptive innovations are complex because their value and application are uncertain, according to the criteria used by incumbent firms.
Read: Two different Finnish startup / business guides: Perustamisopas (Yrityskeskus, 2011) and Yrityksen perustajan opas (Osuuspankki).
Read: Finnish Business Economy Book (taxation and legal requirements in Finland), a very practical guide for small business (SMB) owners.
Read: What Has Worked in Investing by Tweedy, Browne.
- Absolutely excellent information with backing statistics.
My own comment about everything above...
- Well, as far as I have seen, that's exactly how it goes. Disruptive technology gets dismissed as new, scary, unnecessary and "impossible" to make work reliably: "who would want to use it anyway?"
Final comments: as an experienced product manager, I didn't actually see anything new in these books about product development and the product lifecycle. Only the HFT book contained a lot of good statistics and material I previously had no clue about.
Some interesting links and blogs to check out:
Note: the quotes might not be perfectly accurate due to charset and line-feed conversions, other text file conversions, possible fat-fingering during text processing, and so on. These quotes have been sitting in my "blog about" backlog for over two years.