Comments on Perspectives on LedgerSMB: Further Reply to Tony Marston

"How so? If I know that a particular query is causing performance problems I can easily identify from where that query originated. Once I determine how to fix the query I can then change the code without any problem."

OK, so when optimizing a query you are stuck with a few specific steps:

1) Trace which query or set of queries is the problem.
2) Figure out how to fix the query.

As far as that goes, the process is no different for a stored procedure (assuming decent knowledge of the environment) than for dynamic SQL, except that as long as you aren't doing dynamic SQL inside your stored procedures, the queries there are largely self-contained. In other words, you get to that point, you fix the query, and you are done with it.

For dynamic SQL, you have a few additional steps:

3) Trace the building of the query.
4) Figure out how to fix how the query is being built.

I don't think you can argue that code generators are as easy or as streamlined to troubleshoot as code one writes directly. There is a fundamental difference, one of abstraction. In the stored-procedure case, all the code involved is in one place, quickly accessible and right there. In the other case, it may involve scrolling through the code that builds the query, and so on.
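The troubleshooting steps above can be sketched in a minimal (hypothetical, not LedgerSMB) Python/sqlite3 illustration: the static query is a single self-contained string you can fix in place, while the dynamic query must first be traced through the code that assembles it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER, customer TEXT, total REAL)")
conn.execute("INSERT INTO invoices VALUES (1, 'acme', 100.0)")

# Static SQL: the whole query is visible in one place.  If it is slow,
# only steps 1-2 apply: trace it, fix it right here, done.
STATIC_QUERY = "SELECT id, total FROM invoices WHERE customer = ?"
rows = conn.execute(STATIC_QUERY, ("acme",)).fetchall()

# Dynamic SQL: the query text is assembled at run time from fragments,
# so steps 3-4 are also needed: trace *how* it was built before fixing it.
def build_query(filters):
    clauses = ["%s = ?" % col for col in filters]   # string concatenation
    return "SELECT id, total FROM invoices WHERE " + " AND ".join(clauses)

sql = build_query({"customer": "acme"})
rows2 = conn.execute(sql, ("acme",)).fetchall()
```

Both paths return the same rows; the difference is purely in how many layers stand between the slow query and the person trying to fix it.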
It's one of the reasons why I have tended to say that any time I put SQL in the application logic, I keep the queries as static chunks of SQL in the code as far as possible, relying on parameters and as little string concatenation as I can.

Of course, like any rule, there are exceptions.

-- Chris Travers, 2012-01-18 06:38

> My general experience is that the more dynamic the SQL is, the
> harder it is to tune, because the more abstraction layers one has
> to go through in order to find the problems. ORMs are usually
> horrible in this regard because they typically do a very large
> number of operations when a smaller number of operations could
> succeed, and this is usually not fixable.

I don't use an ORM (for reasons explained in http://www.tonymarston.net/php-mysql/object-relational-mappers-are-evil.html), so I don't have this problem.

> Similarly, dynamic SQL, where the whole query is created at
> run-time, imposes additional overhead when trying to locate and
> correct performance issues.

How so? If I know that a particular query is causing performance problems I can easily identify from where that query originated. Once I determine how to fix the query I can then change the code without any problem.

-- Tony Marston, 2012-01-18 02:39

First, if you'd like to discuss history, I am always open to that.
It's a fun area to discuss, in part because it is very hard to prove a specific interpretation, but it is exceedingly fun to argue, discuss sources, and so on.

BTW, if you like medieval history and haven't read it, "Heaven and Earth in the Middle Ages: The Physical World Before Columbus" by Rudolf Simek is another work that very clearly breaks open the narratives we are told about Columbus, and his discussion of Marco Polo's purported encounter with a unicorn is extremely insightful.

As for Marston, I agree he is reactionary, as my post suggests. Indeed, I would suggest it's more than that: he tends to reduce options to extremes and erase the middle, leaving himself only extreme points of view as options. If your only two options are to put all *possible* logic into the database* or to put as little as possible into the database, and if we have a Manichean choice between them, then he is right. But the whole premise is wrong, and neither extreme is very helpful.

* There was discussion on the pgsql-general list of a person who was pausing stored procedures mid-process and sending out network communications to the user to ask for confirmation of additional operations.... Ouch.

As for client-agnostic databases, I completely agree, but there are many ways to accomplish this, hence my repeated calls to encapsulate database access behind a database-level API. This makes the database server a complete server platform in itself.

xTuple and LedgerSMB do not share a common lineage. LedgerSMB is derived from SQL-Ledger, which I have described as a textbook for how not to write software in Perl, how not to write secure software, and how not to design databases. After a few years, though, we have some of the worst problems (including all security issues and major data integrity issues) entirely under control and are refactoring the rest with a chain saw.
However, 200k of Perl isn't replaced overnight.

The big thing we are pioneering, though, which we started for 1.3, is the idea of a discoverable API: there is enough semantic meaning in the names of the functions and arguments that the computer can do what you mean, avoiding many of the breakage concerns you get with stored procedures generally. So, for example, if we need to support a side project that requires an additional input, we can do this without demanding that the application itself be rewritten to accommodate it. We really notice the slow-down in areas where such mappings don't work, so the productivity gains are very noticeable.

One more point about Tony.... This is why trying to support the least common denominator of database engines is EVIL! ;-)

-- Chris Travers, 2012-01-15 22:28

As to who (Goths, Huns, Khans) motivated the regression that we now call the Dark Ages: I was, of course, speaking metaphorically (and in terms of my high-school world history course from some decades ago). But I'll take my medicine with equanimity. In current political parlance, Tony would be described as a reactionary: one who wishes to turn back the clock.

As to the substance: those who've been around long enough to have used databases with integrated 4GLs on Unix, over RS-232 to VT-x00 terminals, know how solid the resulting applications are. Such engines implemented a client-server-in-a-box semantic; the "wire" between the server and the client was just a patch of memory. The client code amounted to SQL stored procedures.
The Web browser application paradigm is exactly the same as what came before those applications: 3270 edit programs doing what JavaScript does today over a disconnected connection (that's not an oxymoron) to a central machine. Really.

Much functionality was lost with the Web browser; the gain was merely pretty pixels over characters. AJAX, WebSockets, and the like are attempting to restore that functionality, even if those developers are utterly unaware they're reinventing a wheel.

If the IT world were so convinced that client-centric application development were superior, we wouldn't be seeing the Cloud meme surge ahead. Data, and its control, are being centralized again. Those such as Tony, who fear losing billable hours to centrally constructed applications, are in for a rough patch.

Scott McNealy was famous for "the network is the computer". What he didn't tell folks was that the phrase was coined in the context of engineering workstation networks: high-functioning local machines doing number crunching on minuscule amounts of shipped data. What the Tonys of the world are asserting is that distributed computing makes sense in a world of simplistic computing on lots of data. It doesn't. Never has.

I'm not clear on the lineage, if any, with LedgerSMB, but xTuple has been implementing client-agnostic, database-driven applications for some time. And with success. Divorcing the logic from the client, leaving only formatting on the client, frees client-side developers to choose whatever client platform feels and looks good. Now, as clients become smaller, in both physical dimensions and computing power, 5NF databases provide a perfect fit for such smaller client machines. Rather than unsupportable 100-field screens (because that's how big the un-normalized "records" are), the phone/tablet user gets the data in logically orthogonal chunks.
Depending on the client capabilities, it can ask for the normalized data in logical sequence, or for the whole mish-mash at once if it's a desktop machine. You pays your money and you takes your choice.

-- Robert Young, 2012-01-15 20:58
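The closing point about normalized data in orthogonal chunks can be sketched with a tiny hypothetical Python/sqlite3 schema (illustrative only, not xTuple's): a constrained client fetches each normalized table in sequence, while a desktop client asks for the whole joined row at once.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE addresses (customer_id INTEGER, city TEXT);
    INSERT INTO customers VALUES (1, 'acme');
    INSERT INTO addresses VALUES (1, 'Seattle');
""")

# A small client (phone/tablet) pulls logically orthogonal chunks in sequence:
customer = conn.execute(
    "SELECT id, name FROM customers WHERE id = ?", (1,)).fetchone()
address = conn.execute(
    "SELECT city FROM addresses WHERE customer_id = ?", (1,)).fetchone()

# A desktop client asks for the whole joined "mish-mash" at once:
wide = conn.execute("""
    SELECT c.id, c.name, a.city
    FROM customers c JOIN addresses a ON a.customer_id = c.id
    WHERE c.id = ?
""", (1,)).fetchone()
```

Either way the same normalized schema serves both clients; only the access pattern differs.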