Saturday, February 25, 2012

A few years ago, I had a problem. A database routine for processing bulk payments for LedgerSMB, which had been humming along in my test cases, failed when tested under load with real data before deployment. Some testing showed that while it ran fine with small numbers of inputs, it slowed down dramatically as the number of inputs rose. Andrew Sullivan, who was also on the project through a different company, suggested the culprit was something called a "cache miss," because I was running through a loop, inserting two rows, and then updating a different table. Although he quickly abandoned that theory, his initial instinct that the loop was the problem was dead right, and within a day I had a fix.
That experience taught me why database and application programming are so different. I have since seen the same mistake I made in contributions from many other developers, and I have concluded that it is a universal mistake.
When we program applications, we think in terms of instructions. We break a problem down into instructions, and we order those instructions to get the right result. We usually favor simple, elegant code over more complex code, and therefore tend to make each individual operation as simple as possible. That is what I was doing, and it failed miserably. I have also had the misfortune of reading several-hundred-line stored procedures that were obviously written by application engineers who lacked the ability to think in db-land. Such stored procedures are not maintainable and usually contain a lot of hidden bugs.
In database-level routines, however, it is extremely important to think in set operations. A clear but complex query will be both easier to read and easier to maintain than a series of smaller, simpler queries. The database is an important math engine; use it as one. Consequently, simplicity and elegance in a stored procedure often mean expressing the most complicated operations in the fewest database queries possible. Because SQL queries are highly structured (a SELECT involves a column list, followed by a set of tables linked by JOIN operations, followed by filter operations, etc.), it is very easy to track down problems, whether they are bugs or performance issues.
With many small operations, like the ones I had been writing, the database cannot optimize the work as a whole, and one may have to contend with problems caching results effectively, or the like. Worse, the operations are not linked together in a predictable way, so problems are harder to track down when they involve specific inputs or concurrent operations.
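To make the contrast concrete, here is a minimal sketch under invented names (a hypothetical invoices/payments schema, not the actual LedgerSMB one). The first function works row by row, the way an application programmer tends to think; the second does the same work as set operations the planner can optimize as a whole:

    -- Row-at-a-time approach: one INSERT and one UPDATE per input row.
    -- Each statement is planned and executed in isolation, so the
    -- database cannot optimize across loop iterations.
    CREATE FUNCTION pay_invoices_iteratively(in_ids int[])
    RETURNS void LANGUAGE plpgsql AS $$
    DECLARE inv record;
    BEGIN
        FOR inv IN
            SELECT id, amount_due FROM invoices WHERE id = ANY(in_ids)
        LOOP
            INSERT INTO payments (invoice_id, amount)
            VALUES (inv.id, inv.amount_due);
            UPDATE invoices SET paid = true WHERE id = inv.id;
        END LOOP;
    END;
    $$;

    -- Set-based approach: the same work in two statements, each
    -- operating on the whole input set at once.
    CREATE FUNCTION pay_invoices_setwise(in_ids int[])
    RETURNS void LANGUAGE plpgsql AS $$
    BEGIN
        INSERT INTO payments (invoice_id, amount)
        SELECT id, amount_due FROM invoices WHERE id = ANY(in_ids);

        UPDATE invoices SET paid = true WHERE id = ANY(in_ids);
    END;
    $$;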
The way we think when we write SQL (including PL/pgSQL) is just fundamentally different from the way we think when we write Perl, Python, or C.
Wednesday, February 15, 2012
Thoughts on LedgerSMB 1.4
LedgerSMB 1.4 is now the primary development priority for my business, and for some others as well. A number of important developments are already part of the project, and more are on their way.
LedgerSMB 1.4 will require PostgreSQL 8.4 or higher. In 1.3 we made use of the connectby() routine in the tablefunc contrib module (now an extension in 9.1), but with the release of 9.1 this has become hard to manage, and WITH RECURSIVE (which was new in 8.4) does the job at least as well. WITH RECURSIVE and connectby() have different traps and problems: the former is susceptible to infinite recursion, while the latter handles some cases painfully slowly, so there is a tradeoff. However, we feel it is worth it. Even in the fastest cases, WITH RECURSIVE runs twice as fast as connectby().
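As a sketch of what the replacement looks like (hypothetical table and column names, not the LedgerSMB schema): a connectby()-style tree walk as a recursive CTE, with an accumulated-path check guarding against the infinite-recursion trap mentioned above:

    -- Hypothetical adjacency-list table: each row points at its parent.
    -- connectby('units', 'id', 'parent_id', ...) walked such a tree;
    -- this recursive CTE does the same without the tablefunc module.
    WITH RECURSIVE unit_tree AS (
        SELECT id, parent_id, label,
               1 AS level, ARRAY[id] AS path
          FROM units
         WHERE parent_id IS NULL           -- start at the roots
        UNION ALL
        SELECT u.id, u.parent_id, u.label,
               t.level + 1, t.path || u.id
          FROM units u
          JOIN unit_tree t ON u.parent_id = t.id
         WHERE u.id <> ALL(t.path)         -- cycle guard: never revisit a node
    )
    SELECT id, label, level, path FROM unit_tree ORDER BY path;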
The first major improvement was to make the change above and thus remove all dependencies on contrib modules or extensions. This will make installation and upgrades go far more smoothly than they did for 1.3. It also gives us a standard way to handle tree-like structures.
Along the same lines, we are working on replacing the current project and department tracking interface with a more extensible one we call business reporting units. Business reporting units can be departments or projects, but they can also be engineered-to-order jobs, funds (used by non-profits), or other links to the line items on an invoice. Generalizing this means we can provide a much more useful reporting system, one that can serve markets where LedgerSMB is not yet suitable.
Business reporting units will support nested units. This will allow reporting on a project or a subproject, on a department or a sub-department, etc., and WITH RECURSIVE will once again be a primary element of this infrastructure, as sketched below.
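A sketch of how reporting over nested units might work, again with invented names: one recursive pass collects a unit's subtree, and a join aggregates over it:

    -- Total invoice-line amounts for reporting unit 42 and everything
    -- nested beneath it ("reporting_units" and "invoice_lines" are
    -- hypothetical names, not the actual schema).
    WITH RECURSIVE subtree AS (
        SELECT id FROM reporting_units WHERE id = 42
        UNION ALL
        SELECT r.id
          FROM reporting_units r
          JOIN subtree s ON r.parent_id = s.id
    )
    SELECT sum(l.amount) AS subtree_total
      FROM invoice_lines l
      JOIN subtree s ON l.reporting_unit_id = s.id;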
We have already added budget logic, and more is coming. There is a basic payroll framework (without locale-specific code) which is largely done, and of course new reports. There is also some talk of adding forecasting.
Wednesday, February 8, 2012
Further Response to Robert Young
Robert Young has responded to my previous post and clarified his position. However, he is still incorrect in his assessment. His comments are quoted below, with my responses following each.
Mr Young writes:

"The issue is lost cycles as one moves down to a single thread. It's more about the number of clients that can be supported 'simultaneously'. At one time I was also enamoured of thread count to execute queries in parallel; not so much now. For reporting generally, and BI/DW specifically, sure. But for OLTP, where my interest lies, not so much. There, client count is what matters. Clients run relatively discrete transactions on few rows; parallelism isn't much help for that sort of client. Yes, to some extent I've changed my mind."

I think this makes the mistake of assuming that single-threaded, multiple-process models cannot utilize hyperthreading architectures to increase computational efficiency.
The first thing to note is that how a CPU's cycles are utilized is primarily the job of the scheduler, which is part of any multitasking kernel, and different operating systems use very different process and thread models. These cannot simply be reduced to their hardware equivalents. If they could, process vs. thread performance would be largely equivalent on Windows and Linux. In fact, Windows is very thread-friendly and process-unfriendly, though my understanding is that applications on Windows running under SUA experience better multiprocess performance than those under the Win32 subsystem.
On Linux and UNIX there is very little difference between a pthread and a process except for memory isolation. Individual processes can be scheduled on logical cores supporting hyperthreading, for example. The overhead for supporting multiple processes is low.
On Windows, on the other hand, threads are cheap and processes are expensive. This is one reason I typically recommend that high-concurrency PostgreSQL instances not be run on Windows. My understanding is that SUA addresses this issue, but I can't find solid comparisons at the moment, and I don't know of a PostgreSQL port to that platform.
Mr Young continues:

"Think of the issue this way: if it were true that scaling down threads to one per processor/core yielded a scale up in performance equivalently (only one of two threads yields twice the performance on that last thread), then the question gets more difficult. But threads and cores don't scale that way. Such reverse scaling could only happen if the core's frequency increased as the inverse of threads active. That just doesn't happen, you get a bin or two; that nasty frequency brick wall is the reason vendors have turned to multi-core and threads in the first place."

I am not sure I buy that. Linux, for example, is quite happy to schedule different processes on different logical cores, which means that two processes can share a hyperthreading core in the same way two threads can. Again, that behavior is specific to a combination of CPU, OS, and scheduler.
He goes on:

"Unless you've been building database applications from before 1995 (or thereabouts), which is to say only webbie thingees, the efficiency and user friendliness of *nix/RDBMS/RS-232/terminal is a foreign concept; just as it still is for mainframe COBOL coders writing to 3270 terminals, same semantics (I'll grant that javascript is more capable than 3270 edit language, and Wikipedia does it again, with this write up). In the web age, AJAX was the first widely known (and possibly absolute first) attempt to get back to that future, then there is Comet. Now, we have WebSocket, discussed in earlier posts."
I don't see what this has to do with anything. I personally find the user friendliness of an RDBMS and *nix very important. I also primarily build accounting apps, and doing so over the web is often not performance-friendly, for reasons I am more than happy to discuss in another post (it's one reason why, in the long run, I want to move away from requiring that everything go through a web interface). Basically, transaction handling is somewhat crippled when tied to a stateless network protocol like HTTP: every transaction must be atomic within a single HTTP request, and where we would normally not do that, we have to invent ways around the limitation.
Finally, he writes:

"The Achilles heel of those *nix/RDBMS/RS-232/VT-X00 applications was the limit on the number of clients that could be accommodated since the client patch remained while the user kept a session. The emerging hardware/infrastructure that I'm discussing may/should remove that restriction. This also means that developers must heed the mantra, 'the user doesn't determine transaction scope'. With HTTP (un)connectedness, that isn't much of an issue."
This is the wrong way to look at it. HTTP unconnectedness in many cases actually makes the problem worse rather than better, if you have to preserve database state across HTTP requests (which LedgerSMB, by the way, does in some cases). It's one thing to have 15 connections updating payments in a database of 10 million transactions while relying on advisory locks. It's a very different thing to have 15 users hitting that same database and having to store in the database itself which transactions each user is temporarily holding for payment. Suddenly you go from 3-second queries to 45-second queries, and I would expect that we could support at least 10x the number of concurrent users on a large database were it not for this unconnectedness. Instead you are dealing with a large amount of I/O contention and row locking (because we have to make sure we get the right information so invoices aren't double-paid), and while it works, it isn't pretty, nor does it perform particularly well. See the sketch below.
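For the connected case, here is a minimal sketch of the advisory-lock approach (the invoices table is hypothetical; pg_try_advisory_lock() and pg_advisory_unlock() are standard PostgreSQL functions). Each session claims an invoice before preparing a payment, so concurrent users skip invoices a colleague already holds instead of double-paying them:

    -- Try to claim invoice 12345 for this session. Returns false
    -- immediately if another session already holds the lock, so the
    -- caller can simply move on to the next invoice.
    SELECT pg_try_advisory_lock(12345) AS claimed;

    -- If claimed, read the invoice knowing no other connected session
    -- is preparing a payment against it.
    SELECT id, amount_due FROM invoices
     WHERE id = 12345 AND paid = false;

    -- After posting (or abandoning) the payment, release the lock.
    -- Session-level advisory locks are also released automatically
    -- when the connection closes, which is exactly the cleanup a
    -- stateless HTTP model cannot take advantage of.
    SELECT pg_advisory_unlock(12345);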
That said, the database is usually even less of an issue in these cases than the web server.
Moreover, web apps, particularly public-facing sites, can experience traffic spikes that easily exceed the number of connections a database can be expected to handle gracefully. So in both cases, simple public sites like blogs and complex web-based business apps, RDBMS-backed web apps perform worse than thick clients.
But the overall point remains the same: scheduling of processes and threads is the OS's job, not the CPU's. Wasted cycles in scheduling are the OS's problem, not the CPU's. If an OS can't schedule processes as efficiently as it schedules threads, that is the OS's constraint, and it is only the app's problem if you are writing with a particular OS in mind and that OS doesn't behave the way you would like. Windows, in particular, is heavily oriented toward threads. It isn't clear that most *nix platforms see the same benefit, as witnessed by Oracle's architecture, which assumes one process per connection.
Also, a note on DB2's connection multiplexing: it isn't clear that this actually requires multiple threads. You can only receive a stream of data over a connection, and that stream has to be parsed and handled. It's not at all clear that this needs threads, when it is clear that, however you do it, it needs a queue.
BTW, PostgreSQL parallel performance is getting a LOT of attention in the move from 9.1 to 9.2.
Monday, February 6, 2012
Robert Young is Wrong about Threads and PostgreSQL
Robert Young has claimed, not once but twice, that a multi-threaded engine is necessary for adequate database performance. I am going to explain here why he is wrong.
Mr Young's argument goes something like this: threads are cheaper, in processor terms, to spin up than processes; therefore, if you insist on a single-threaded, multi-process model, you cannot have sufficient performance to make your database system relevant.
There are several important problems with this thinking. The first is that for the difference to matter, thread-vs-process overhead, plus related IPC, must account for a significant portion of actual CPU load. If it does not, the difference is lost in the noise. In my experience looking at databases, I have yet to identify one case where it does. Instead, things like network connection setup and teardown, command parsing, and, well, actual work dwarf process or thread startup by several orders of magnitude. The startup overhead is insignificant in the overall scheme of things.
Of course, actually starting a process or thread is not the whole picture. There is communication between processes or threads to take into account, there is the added complexity of locking shared resources in a threaded model, and on the other side there are the costs that come with a lack of process isolation. These all have to be weighed against each other, and it is not clear which side wins. Even these costs dwarf the startup costs by a significant margin.
Rather, the reason many RDBMSs have gone to a multi-threaded model is that it is one way, perhaps even an elegant way, to support intraquery parallelism. If you can spin up a few more threads within a process, those threads can scan different partitions in different tablespaces, increasing the resource utilization of a given query. This is an important feature, and it is currently lacking in PostgreSQL. Postgres-XC, however, offers such a solution based on a multi-server, distributed model. It is perhaps not quite ready for prime time, but it is getting there.
But while a multi-threaded engine would make this sort of parallelism easier to implement, it isn't clear to me that it is necessary, or that the cost in complexity would be worth it compared to multi-process or even multi-server models. Threaded systems are far harder to troubleshoot, far less transparent, and far more brittle than process-oriented systems.
Note that each approach has strong points and weak points; declaring a specific architecture the clear winner forgets that these differences exist. Ideally, PostgreSQL would eventually support multi-server and multi-process methods of intraquery parallelism. I am not convinced that we need a multi-threaded method in addition.
Update: Robert Young also links to an IBM piece about multiplexing network connections. The benefit there, however, is specific to the case where you have enough simple queries on their own connections to justify the additional complexity. Yes, you can write benchmarks that show a small but measurable benefit, but it is not clear those benchmarks map to any significant portion of real-world scenarios. Certainly, in the sort of programming I do, I would expect no performance gain whatsoever from such a model.
Sunday, February 5, 2012
LedgerSMB talk at FOSDEM
Hi,
I just arrived at FOSDEM in Brussels. Due to last-minute arrangements, my talk about LedgerSMB isn't in the schedule. So, here's the announcement:
There will be a LedgerSMB talk in the Perl devroom (AW1.121) between 12:25 and 12:45.
The talk is on the FOSDEM site: http://fosdem.org/2012/schedule/event/ledgersmb_perl
With kind regards,
Erik.