Just a few thoughts about databases

Post date: Feb 20, 2016 6:41:57 AM

  • Read the PostgreSQL Database Physical Storage documentation and a very nice blog post, Introduction to Postgres Physical Storage, covering some use cases and practical details. Every database has its own names for these things: Postgres uses oid, some use rowid and others just use one field. I've seen many times that programmers incorrectly assume this is some kind of good identifier, when it's not. It gets changed when certain events happen. With Multi-Version Concurrency Control (MVCC) databases every write can change it, and with other databases it usually gets changed when you compress or vacuum the database after data has been deleted (there's a small SQLite sketch of this at the end of the post). Also, some databases interestingly enough use an unordered hash index for this id, so any range query over it will always trigger a full table scan. As an example, one database system doesn't allow range queries over its integer primary key without a full table scan, which is enraging. It's also a very bad idea to use rowid or a similar technical field as a primary key, or even worse as a relation identifier, because of those potential renumbering issues.
How system tables are stored also varies wildly between database engines, as does how the actual data tables are stored. Some systems use a separate file for each table, index and so on, others just stuff everything into the same file, and some let you easily choose between both options. Some databases use fixed-length records, others use dynamic-length records as well as page chaining for records too large to fit into one page. Some systems just grow the page size so that one page contains exactly one record, which can be highly inefficient if the record size isn't a good match for the page size being used. CTID (ItemPointer); The Oversized-Attribute Storage Technique (TOAST), which reminds me of the so-called memo / blob fields in very old database systems that worked just like TOAST; page / block spanning.
A classic consideration is whether a larger field should be stored in the table itself or joined from another table, so that an int can be stored instead of some longer text. These two options have very different performance characteristics, and I've seen both widely used. In some cases I even opt for soft joining, where the client application itself fetches and caches the relation values, so the queries don't need to be joined at all and the amount of data transferred to the client doesn't grow (also sketched at the end of the post). These block structures, and the repeated strings possibly stored inside records, are also one of the reasons why I constantly see databases compress with standard gzip to 5% of the original size (95% compression) or even better. Large records with a lot of empty space put a great strain on cache memory and disk I/O.
When to use joins, separate queries or larger records is also affected by the typical setup. Is the database service local, what's the round trip, how efficiently can it execute many independent small queries versus very large extensive queries, and so on. In some cases it's very important to build one huge query and get everything at once. In other cases you can use an SQL database just like you would use any NoSQL database, with small simple queries. Yet the latter model can be devastatingly bad if there's a high round-trip time between the client and the server and the protocol gets 'chatty'. The smartness of the DB driver is also very important: does it efficiently compress joins or not? In some cases doing joins actually destroys performance, if the joins aren't smartly handled by the local client driver.
Let's pull a million records, which all hold something like a timestamp and a reference to a larger, let's say 8 KB, record. Now when I fetch it with an SQL join, does the 8 KB record get sent once or a million times over the network connection? Doing the same with a client-side join could save a lot of time, unless the DB client driver / connector is already smart enough. Is reading 100 records faster than reading 1000 records? Maybe, maybe not. Once again it depends on many factors; one of those is which records might already be in cache and whether the pages containing the data are scattered or contiguous. All 1000 records could potentially be in 1 or 2 pages / blocks, but the 100 records could each require reading a separate block from disk. It also depends on the cardinality of the indexes that have to be read, and so on (there's some rough round-trip and page-read arithmetic on this at the end of the post).
  • I've also been using MSSQL (MS SQL Server), but I haven't done anything that could be classified as complex with it. It's mostly basic SQL CRUD in transactions, no advanced features being used.
  • Sorry the post isn't structured more cleanly. It's just thought flow from going through the articles and initially wasn't supposed to be a separate blog entry.
  • Did the Introduction to Postgres Physical Storage post contain anything new? - No. But it's a good idea to remind yourself about this stuff every now and then.
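
A few rough sketches related to the notes above, starting with the rowid point. This is a minimal, hypothetical Python demo using SQLite (not from the articles): in a table without an INTEGER PRIMARY KEY, the implicit rowids may get renumbered when VACUUM rebuilds the file after deletes. The same caution applies to Postgres ctid, which can change on any UPDATE under MVCC.

    import os, sqlite3, tempfile

    path = os.path.join(tempfile.mkdtemp(), "demo.db")
    conn = sqlite3.connect(path, isolation_level=None)   # autocommit; VACUUM can't run inside a transaction
    conn.execute("CREATE TABLE t (val TEXT)")             # note: no INTEGER PRIMARY KEY
    conn.executemany("INSERT INTO t (val) VALUES (?)", [(c,) for c in "abcde"])
    print(conn.execute("SELECT rowid, val FROM t").fetchall())   # [(1, 'a'), ..., (5, 'e')]

    conn.execute("DELETE FROM t WHERE rowid <= 3")         # leave holes in the rowid sequence
    conn.execute("VACUUM")                                 # rebuild / compact the database file
    print(conn.execute("SELECT rowid, val FROM t").fetchall())   # typically [(1, 'd'), (2, 'e')] -- rowids silently renumbered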
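
On the soft join / client-side join idea, a minimal sketch assuming a generic DB-API connection and two hypothetical tables, events(ts, doc_id) and docs(id, body), where body is the large ~8 KB field (qmark parameter style; adjust for your driver). Instead of a server-side join that may ship the same large body once per matching event row, the client pulls the narrow rows and resolves each doc_id through a local cache, so every distinct body crosses the wire only once.

    from functools import lru_cache

    def events_with_docs(conn):
        cur = conn.cursor()

        @lru_cache(maxsize=None)                       # each distinct body is fetched and cached once
        def get_body(doc_id):
            cur.execute("SELECT body FROM docs WHERE id = ?", (doc_id,))
            return cur.fetchone()[0]

        cur.execute("SELECT ts, doc_id FROM events")   # narrow rows only, no join
        for ts, doc_id in cur.fetchall():
            yield ts, get_body(doc_id)                 # the 'join' happens in the client, via the cache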
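
On the chatty protocol / round-trip point, some back-of-the-envelope arithmetic with made-up numbers rather than measurements: with a 1 ms round trip, issuing one small query per row adds up to many minutes of pure waiting over a million rows, while a single big query pays the round-trip cost once.

    rows = 1_000_000
    rtt  = 0.001                      # assumed 1 ms client <-> server round trip
    per_row_queries = rows * rtt      # one query per row: ~1000 s spent just waiting on the network
    one_big_query   = 1 * rtt         # one query: round-trip cost is negligible
    print(f"{per_row_queries:.0f} s vs {one_big_query:.3f} s in round trips alone")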
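
And on the '100 vs 1000 records' question, the same kind of rough arithmetic with illustrative sizes only: if rows are small and stored contiguously, a thousand of them fit in a handful of 8 KB pages, while a hundred rows scattered around the table can each cost their own page read.

    page_size, row_size = 8192, 50                       # assumed sizes, not measurements
    contiguous_pages = -(-1000 * row_size // page_size)  # 1000 contiguous rows -> ~7 page reads
    scattered_pages  = 100                               # 100 scattered rows -> up to 100 page reads
    print(contiguous_pages, scattered_pages)             # fewer rows can still mean more I/O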