The first use-case is: we use triggers to assign or modify column values of mutating rows (that is, rows currently being inserted, updated, or deleted). Here are a few typical examples of this use-case that, I'm sure, will look familiar to you too.
For this use-case we always use a row-level trigger (for each row), as this type of trigger allows us to manipulate the :new.[column-name] variables. We can re-assign column values supplied by the triggering DML-statement, or initialize ones that were not supplied. So for instance the user enters all column values except the Created and By_User columns, which are then initialized by a row-trigger we wrote.
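To make this concrete, here is a minimal sketch of such a row-trigger (the trigger name is made up, and the EMP table with its Created/By_User columns is just illustrative):

```sql
-- Hypothetical row-level trigger: initializes the audit columns
-- the user did not supply on insert.
create or replace trigger emp_bri
before insert on emp
for each row
begin
  :new.created := sysdate;
  :new.by_user := user;
end;
/
```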
The second use-case is: we use triggers to execute insert/update/delete statements. We perform some DML against a table; that table has a trigger, and inside that trigger we perform more DML.
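For example (all table and trigger names here are hypothetical), an insert into EMP might silently maintain a headcount in another table:

```sql
-- Hypothetical example: the trigger performs additional DML,
-- so one insert quietly becomes an insert plus an update.
create or replace trigger emp_ari
after insert on emp
for each row
begin
  update dept_stats
     set headcount = headcount + 1
   where deptno = :new.deptno;
end;
/
```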
The third use-case is: we use triggers to execute non-transactional stuff. And by "non-transactional" I mean stuff that, once executed inside our transaction, does not participate in a rollback, in case we (the code-path) decide to perform one.
The most beautiful example of the last decade is calling a web-service inside the trigger body. If this is a web-service that actually 'does something' (changes/transacts information at/behind the endpoint of the service), then obviously when we roll back our triggering DML-statement, whatever was changed by the web-service remains changed. Other examples of this use-case: we send email from our trigger body, or perform file I/O. We can also start an autonomous transaction (one that performs some DML which, since it's executed inside the autonomous TX, cannot be rolled back by the parent transaction). That last one is actually a blend of this use-case with the previous one.
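A minimal sketch of the autonomous-transaction variant (the EMP_CHANGE_LOG table and all names here are hypothetical):

```sql
-- Hypothetical example: the logged row commits independently and
-- survives a rollback of the triggering update statement.
create or replace trigger emp_aru
after update on emp
for each row
declare
  pragma autonomous_transaction;
begin
  insert into emp_change_log (empno, changed_on)
  values (:new.empno, sysdate);
  commit;  -- required inside an autonomous transaction
end;
/
```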
So what's harmful about all these use-cases? Well, they make stuff happen automagically (copyright the other TK).
What we mean by this is that, instead of only getting the effect of the triggering DML-statement, you also get the effect of whatever is performed inside the trigger body. And exactly that is the harmful bit.
In a SQL DBMS, the insert, update and delete 'operators' constitute three primitive operators that we can use to manipulate the data held inside the database (tables). Primitive here is used in the sense of primary or basic, sort of like Inc(rement) and Dec(rement) are primitives that can be used in machine language to increase or decrease a register's value. These primitive operators have well-understood and universally known change-semantics. Say you start with an empty EMP table and then insert an employee row: you end up with a change to the EMP table (it now has a single row in it), and all other bits and pieces of the database remain unchanged.
Triggers executing automagic stuff change these universally well-understood change-semantics of our three primitive operators.
And this is harmful: because nobody (after you've gone) expects insert, update or delete to quietly behave differently. For those of you who have done some machine programming in a distant past: it's like incrementing one of your registers and suddenly (as an automagic side-effect) the top of the execution stack grows too, with some subroutine's address pushed onto it. Not something you'd expect...
The TCO (total cost of ownership) of an application sits not in initially building the thing. It sits in the years, sometimes decades, the thing lives, and you, but more probably others, maintain it. And more often than not these developers expect the available primitive operators to follow the primitive change-semantics.
I'll finish this post with the following thought:
If you agree on the three use-cases discussed being harmful, do you then also agree with me that:
- The cascade delete option of a foreign key, and
- The default clause of a table column, and
- The char (fixed-length) datatype
are to be considered harmful too?
Aren't they conceptually about the exact same problem? Introducing different change-semantics to our three primitive operators?
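To illustrate the parallel, here is a hypothetical schema (all names made up) where each of these three features quietly changes what a plain statement does:

```sql
create table dept2 (
  deptno number primary key
);

create table emp2 (
  empno   number primary key,
  deptno  number references dept2 (deptno) on delete cascade,
  created date default sysdate,
  job     char(10)
);

insert into dept2 values (10);
insert into emp2 (empno, deptno, job) values (1, 10, 'CLERK');
-- Created was filled in silently, and job is blank-padded to 'CLERK     '.
delete from dept2 where deptno = 10;
-- The EMP2 row vanishes too.
```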
I'd be interested to hear of any other 'features' that change the way insert, update or delete behave.
"...If you agree on the three use-cases discussed being harmful..."
No, I don't agree:
Use Case 3, "execute non-transactional stuff", is clearly a bug: triggers are part of the transaction, hence they cannot be used for non-transactional stuff. So this is not harmful but a misuse of triggers.
Use Cases 1. & 2.: not harmful as far as I am concerned.
From a developer's perspective: triggers are just a programmer's tool. They can be misused (as we saw).
But: they also can be used to implement quite nice features.
I have used them successfully for years, for example in auditing my databases and in streaming data asynchronously to various subscribers. In both cases the triggers are machine-generated code (PL/SQL packages generate them).
I could also use Oracle built-in functionality for both cases. But most of the time I don't need all the advanced features the built-ins offer, and then I find it much more comfortable to have the code in my own hands: I know what it does.
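A minimal sketch of the kind of generated audit trigger described here (all table and column names are illustrative):

```sql
-- Hypothetical generated audit trigger: records who changed a
-- salary, and when, in a separate audit table.
create or replace trigger emp_audit_aru
after update of sal on emp
for each row
begin
  insert into emp_audit (empno, old_sal, new_sal, changed_by, changed_on)
  values (:old.empno, :old.sal, :new.sal, user, sysdate);
end;
/
```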
I'm with marogel: automagic is not necessarily bad. Of course, some of the things that I used to do with triggers can now be done with newer features like function-based indexes and virtual columns. Virtual columns in particular replace real columns that I used to populate "automagically".
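For instance, a virtual column (available since Oracle 11g) can replace a trigger-populated column; a minimal illustrative sketch (names made up):

```sql
create table orders (
  order_id   number primary key,
  unit_price number,
  quantity   number,
  -- computed by the database, no trigger needed
  line_total number generated always as (unit_price * quantity) virtual
);
```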
What about indexes on tables? Are they harmful as well? They are maintained automagically during insert and delete, and also, when necessary, during update, so we have quietly different behaviour here too, correct?