Joined: 24-Jul-2009
Hi Otis,
We bought 2 licenses to start a pilot project (it is in SQL Server 2008). If all goes well we will use it for a rather large Firebird project: a WinForms application using a light client and an application server using WCF. I tested these things and they seem to work fine.
But the support for Firebird 2.1 isn't that good: from version 2.0 on, Firebird has improved dramatically, and version 2.1 added a lot of new things as well.
After some things I discovered, you introduced a new setting: FirebirdDQECompatibilityLevel. Maybe we can use this to further enhance the FirebirdDQE.
Main things:
From v2.0.x:
* Support for derived tables
==> CreateRowCountDQ can be implemented
==> All Linq statements with derived tables should be supported (I think you already did that)
==> Aliases now also work completely for sub-queries (you already fixed that for me)
The following things should enhance CreateFunctionMappingStore():
* Native IIF statement: IIF( <search_condition>, <value1>, <value2> )
* The built-in function SUBSTRING() can now take arbitrary expressions in its parameters.
* New [NOT] DISTINCT test treats two NULL operands as equal. Example:
Val1 is distinct from Val2
is the same as
((val1 is null) and (val2 is not null)) or ((val1 is not null) and (val2 is null)) or (val1 <> val2)
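As a sketch, the new test lets you write this NULL-aware comparison directly in a query; FooTable and its columns are hypothetical names:

```sql
-- Rows where OldValue and NewValue differ, with two NULLs treated as equal;
-- before v2.0 this needed the three-way OR expression above
select *
from FooTable
where OldValue is distinct from NewValue;
```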
- Support for context variables. The context can be System, Session or Transaction. This makes it possible to feed triggers/stored procedures with additional data. For example:
UserModified = coalesce(rdb$get_context('USER_TRANSACTION', 'User_ID'), current_user)
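For completeness, the write side of this mechanism could look as follows; the 'User_ID' variable name mirrors the example above, and the literal value is made up:

```sql
-- The client sets a transaction-scoped context variable once;
-- triggers/stored procedures can then read it back with rdb$get_context
select rdb$set_context('USER_TRANSACTION', 'User_ID', '42')
from rdb$database;
```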
In web applications, where connection pooling is probably always used and only one DB user is created for all sessions, it is cumbersome in some engines to know in triggers/stored procedures who is behind the browser; this solves the problem very easily.
* RETURNING clause for insert statements: this is very important regarding sequences/generators. Now in the system we have an extra round trip to determine the surrogate PK of the table if a trigger assigned the key using a generator/sequence. BTW, if between the insert and the
select gen_id(<generator>,0) from rdb$database
query another concurrent insert occurred, you will NOT get the correct generator value! This is why from v2.0.x on it is important to use the following syntax: insert into TableFoo(...) returning PKField into :MyPKVar. This is always safe and no extra round trips are made. BTW the OUTPUT clause in SQL Server does the same, but it can return more than one row. Implementing this is necessary for a highly transactional database; it guarantees that you will have the correct PKField value (if generated by a trigger).
* EXECUTE BLOCK: you can send multiple SQL statements in one go. It isn't the same as a batch in SQL Server, but it can be used to speed up bulk inserts/updates. It is like an anonymous stored procedure; its source code length is limited to 64 KB. For example, you can write more than one insert statement:
EXECUTE BLOCK [ (param datatype = ?, param datatype = ?, ...) ]
[ RETURNS (param datatype, param datatype, ...) ]
AS
[DECLARE VARIABLE var datatype; ...]
BEGIN
...
END
Examples:
EXECUTE BLOCK (X INTEGER = :X) RETURNS (Y VARCHAR)
AS
DECLARE V INTEGER;
BEGIN
INSERT INTO T(...) VALUES (... :X ...);
SELECT ... FROM T INTO :Y;
SUSPEND;
END
execute block (
x1 bigint = @x1, y1 varchar(10) = @y1, z1 date = @z1,
x2 bigint = @x2, y2 varchar(10) = @y2, z2 date = @z2,
x3 bigint = @x3, y3 varchar(10) = @y3, z3 date = @z3)
returns (
pk1 bigint,
pk2 bigint,
pk3 bigint)
as
begin
insert into foo(x,y,z) values(:x1, :y1, :z1) returning pk into :pk1; suspend;
insert into foo(x,y,z) values(:x2, :y2, :z2) returning pk into :pk2; suspend;
insert into foo(x,y,z) values(:x3, :y3, :z3) returning pk into :pk3; suspend;
end
/*
Returns 3 rows with 1 column, the value of the primary key.
*/
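Outside PSQL the same idea works as a plain DSQL statement from v2.0 on; foo and its columns are the hypothetical names used above:

```sql
-- The generated key comes back with the insert itself,
-- so no second round trip to gen_id() is needed
insert into foo(x, y, z)
values (1, 'a', current_date)
returning pk;
```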
From v2.1.x :
Built-in functions:
ABS
Function: Returns the absolute value of a number.
Format: ABS( <number> )
Example: select abs(amount) from transactions;
ACOS
Function: Returns the arc cosine of a number.
Format: ACOS( <number> )
Notes: Argument to ACOS must be in the range -1 to 1 and returns a value in the range 0 to PI.
Example: select acos(x) from y;
ASCII_CHAR
Function: Returns the ASCII character with the specified code.
Format: ASCII_CHAR( <number> )
Notes: Argument to ASCII_CHAR must be in the range 0 to 255 and returns a value with NONE character set.
Example: select ascii_char(x) from y;
ASCII_VAL
Function: Returns the ASCII code of the first character of the specified string.
Format: ASCII_VAL( <string> )
Notes: 1) Returns 0 if the string is empty. 2) Throws an error if the first character is multi-byte.
Example: select ascii_val(x) from y;
ASIN
Function: Returns the arc sine of a number.
Format: ASIN( <number> )
Notes: Argument to ASIN must be in the range -1 to 1 and returns a value in the range -PI / 2 to PI / 2.
Example: select asin(x) from y;
ATAN
Function: Returns the arc tangent of a number.
Format: ATAN( <number> )
Notes: Returns a value in the range -PI / 2 to PI / 2.
Example: select atan(x) from y;
ATAN2
Function: Returns the arc tangent of the first number / the second number.
Format: ATAN2( <number>, <number> )
Notes: Returns a value in the range -PI to PI.
Example: select atan2(x, y) from z;
BIN_AND
Function: Returns the result of a binary AND operation performed on all arguments.
Format: BIN_AND( <number> [, <number> ...] )
Example: select bin_and(flags, 1) from x;
BIN_OR
Function: Returns the result of a binary OR operation performed on all arguments.
Format: BIN_OR( <number> [, <number> ...] )
Example: select bin_or(flags1, flags2) from x;
BIN_SHL
Function: Returns the result of a binary shift left operation performed on the arguments (first << second).
Format: BIN_SHL( <number>, <number> )
Example: select bin_shl(flags1, 1) from x;
BIN_SHR
Function: Returns the result of a binary shift right operation performed on the arguments (first >> second).
Format: BIN_SHR( <number>, <number> )
Example: select bin_shr(flags1, 1) from x;
BIN_XOR
Function: Returns the result of a binary XOR operation performed on all arguments.
Format: BIN_XOR( <number> [, <number> ...] )
Example: select bin_xor(flags1, flags2) from x;
CEIL | CEILING
Function: Returns a value representing the smallest integer that is greater than or equal to the input argument.
Format: { CEIL | CEILING }( <number> )
Example: 1) select ceil(val) from x; 2) select ceil(2.1), ceil(-2.1) from rdb$database; -- returns 3, -2
COS
Function: Returns the cosine of a number.
Format: COS( <number> )
Notes: The angle is specified in radians and returns a value in the range -1 to 1.
Example: select cos(x) from y;
COSH
Function: Returns the hyperbolic cosine of a number.
Format: COSH( <number> )
Example: select cosh(x) from y;
COT
Function: Returns 1 / tan(argument).
Format: COT( <number> )
Example: select cot(x) from y;
DATEADD
Function: Returns a date/time/timestamp value increased (or decreased, when negative) by the specified amount of time.
Format: DATEADD( <number> <timestamp_part> TO <date_time> ) DATEADD( <timestamp_part>, <number>, <date_time> )
timestamp_part ::= { YEAR | MONTH | DAY | HOUR | MINUTE | SECOND | MILLISECOND }
Notes: 1) YEAR, MONTH and DAY cannot be used with time values. 2) HOUR, MINUTE and SECOND cannot be used with date values. 3) All timestamp_part values can be used with timestamp values.
Example: select dateadd(-1 day to current_date) as yesterday from rdb$database;
DATEDIFF
Function: Returns an exact numeric value representing the amount of time from the first date/time/timestamp value to the second one.
Format: DATEDIFF( <timestamp_part> FROM <date_time> TO <date_time> ) DATEDIFF( <timestamp_part>, <date_time>, <date_time> )
Notes: 1) Returns a positive value if the second value is greater than the first one, negative when the first one is greater, and zero when they are equal. 2) Comparison of date with time values is invalid. 3) YEAR, MONTH and DAY cannot be used with time values. 4) HOUR, MINUTE and SECOND cannot be used with date values. 5) All timestamp_part values can be used with timestamp values.
Example: select datediff(day, x, y) from z;
DECODE
Function: DECODE is a shortcut to CASE ... WHEN ... ELSE expression.
Format: DECODE( <expression>, <search>, <result> [ , <search>, <result> ... ] [, <default> ] )
Example: select decode(state, 0, 'deleted', 1, 'active', 'unknown') from things;
EXP
Function: Returns the exponential e to the argument.
Format: EXP( <number> )
Example: select exp(x) from y;
FLOOR
Function: Returns a value representing the largest integer that is less than or equal to the input argument.
Format: FLOOR( <number> )
Example: 1) select floor(val) from x; 2) select floor(2.1), floor(-2.1) from rdb$database; -- returns 2, -3
GEN_UUID
Function: Returns a universal unique number.
Format: GEN_UUID()
Example: insert into records (id) values (gen_uuid());
HASH
Function: Returns a HASH of a string.
Format: HASH( <string> )
Example: select hash(x) from y;
LEFT
Function: Returns the substring of a specified length that appears at the start of a string.
Format: LEFT( <string>, <number> )
Example: select left(name, char_length(name) - 10) from people where name like '% FERNANDES';
--
LN
Function: Returns the natural logarithm of a number.
Format: LN( <number> )
Example: select ln(x) from y;
LOG
Function: LOG(x, y) returns the logarithm base x of y.
Format: LOG( <number>, <number> )
Example: select log(x, 10) from y;
LOG10
Function: Returns the logarithm base ten of a number.
Format: LOG10( <number> )
Example: select log10(x) from y;
LPAD
Function: LPAD(string1, length, string2) prepends string2 to the beginning of string1 until the length of the result string equals length.
Format: LPAD( <string>, <number> [, <string> ] )
Notes: 1) If the second string is omitted the default value is one space. 2) The second string is truncated when the result string will become larger than length.
Example: select lpad(x, 10) from y;
MAXVALUE
Function: Returns the maximum value of a list of values.
Format: MAXVALUE( <value> [, <value> ...] )
Example: select maxvalue(v1, v2, 10) from x;
MINVALUE
Function: Returns the minimum value of a list of values.
Format: MINVALUE( <value> [, <value> ...] )
Example: select minvalue(v1, v2, 10) from x;
MOD
Function: MOD(X, Y) returns the remainder part of the division of X by Y.
Format: MOD( <number>, <number> )
Example: select mod(x, 10) from y;
OVERLAY
Function: OVERLAY( <string1> PLACING <string2> FROM <start> [ FOR <length> ] ) returns string1 replacing the substring FROM start FOR length by string2.
Format: OVERLAY( <string> PLACING <string> FROM <number> [ FOR <number> ] )
Notes: 1) If <length> is not specified, CHAR_LENGTH( <string2> ) is implied. 2) The OVERLAY function is equivalent to: SUBSTRING(<string1>, 1 FOR <start> - 1) || <string2> || SUBSTRING(<string1>, <start> + <length>)
--
PI
Function: Returns the PI constant (3.14159...).
Format: PI()
Example: val = PI();
POSITION
Function: Returns the position of the first string inside the second string starting at an offset (or from the beginning when omitted).
Format: POSITION( <string> IN <string> ) POSITION( <string>, <string> [, <number>] )
Example: select rdb$relation_name from rdb$relations where position('RDB$' IN rdb$relation_name) = 1;
POWER
Function: POWER(X, Y) returns X to the power of Y.
Format: POWER( <number>, <number> )
Example: select power(x, 10) from y;
RAND
Function: Returns a random number between 0 and 1.
Format: RAND()
Example: select * from x order by rand();
REPLACE
Function: REPLACE(searched, find, replacement) replaces all occurrences of "find" in "searched" by "replacement".
Format: REPLACE( <string>, <string>, <string> )
Example: select replace(x, ' ', ',') from y;
REVERSE
Function: Returns a string in reverse order.
Format: REVERSE( <value> )
Notes: REVERSE is a useful function for indexing strings from right to left.
Example: create index people_email on people computed by (reverse(email)); select * from people where reverse(email) starting with reverse('.br');
RIGHT
Function: Returns the substring of a specified length that appears at the end of a string.
Format: RIGHT( <string>, <number> )
Example: select right(rdb$relation_name, char_length(rdb$relation_name) - 4) from rdb$relations where rdb$relation_name like 'RDB$%';
ROUND
Function: Returns a number rounded to the specified scale.
Format: ROUND( <number> [, <number> ] )
Notes: If the scale (second parameter) is negative, the integer part of the value is rounded. Ex: ROUND(123.456, -1) returns 120.000.
Examples: select round(salary * 1.1, 0) from people;
RPAD
Function: RPAD(string1, length, string2) appends string2 to the end of string1 until length of the result string becomes equal to length.
Format: RPAD( <string>, <number> [, <string> ] )
Notes: 1) If the second string is omitted the default value is one space. 2) The second string is truncated when the result string will become larger than length.
Example: select rpad(x, 10) from y;
SIGN
Function: Returns 1, 0, or -1 depending on whether the input value is positive, zero or negative, respectively.
Format: SIGN( <number> )
Example: select sign(x) from y;
SIN
Function: Returns the sine of a number.
Format: SIN( <number> )
Notes: Argument to SIN must be specified in radians.
Example: select sin(x) from y;
SINH
Function: Returns the hyperbolic sine of a number.
Format: SINH( <number> )
Example: select sinh(x) from y;
SQRT
Function: Returns the square root of a number.
Format: SQRT( <number> )
Example: select sqrt(x) from y;
TAN
Function: Returns the tangent of a number.
Format: TAN( <number> )
Notes: Argument to TAN must be specified in radians.
Example: select tan(x) from y;
TANH
Function: Returns the hyperbolic tangent of a number.
Format: TANH( <number> )
Example: select tanh(x) from y;
TRUNC
Function: Returns the integral part (up to the specified scale) of a number.
Format: TRUNC( <number> [, <number> ] )
Notes: If the scale (second parameter) is negative, the integer part of the value is truncated. Ex: TRUNC(123.456, -1) returns 120.000.
Example: 1) select trunc(x) from y; 2) select trunc(-2.8), trunc(2.8) from rdb$database; -- returns -2, 2 3) select trunc(987.65, 1), trunc(987.65, -1) from rdb$database; -- returns 987.60, 980.00
--> All the string functions are now aware of the character set and collation. This is very important: UPPER now correctly translates ä to Ä, it can handle Unicode correctly, etc.
==> So all functions not implemented in CreateFunctionMappingStore() can now be implemented, like trim, rtrim, ltrim, right, left, etc.
- Common Table Expressions (CTEs), even recursive ones (limited to a depth of 1024)
- MERGE statement
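A minimal recursive-CTE sketch of what v2.1 accepts; the folders table and its columns are hypothetical:

```sql
-- Walks a self-referencing hierarchy top-down;
-- Firebird caps the recursion depth at 1024
with recursive tree as (
  select id, parent_id, name from folders where parent_id is null
  union all
  select f.id, f.parent_id, f.name
  from folders f join tree t on f.parent_id = t.id
)
select * from tree;
```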
So Otis, you see, so much has changed that the current implementation isn't very accurate anymore.
If you want, I can already start improving it and send you the changed code.
The most critical/important things are:
* Make a distinction between 1.5, 2.0 and 2.1, so legacy code will not be broken.
* Use the RETURNING syntax from version 2.0.x on if a table generates a PK through a trigger.
* From 2.1 on, use all built-in functions (just enhance the CreateFunctionMappingStore() method); Firebird offers enough to compete better with SQL Server/Oracle and other DBs.
Danny,
We are registered under the company Peopleware NV, Belgium.
Hi,
You can find almost everything I posted here in the .\doc directory after you have installed Firebird.
If you install 2.1.3, you get 2 PDF files: Firebird_v1.5.5.ReleaseNotes.pdf and Firebird_v2.1.3.ReleaseNotes.pdf.
Each of these files explains in detail which new features are implemented and when they were introduced. The .\doc directory also contains another directory, sql.extensions. This directory holds text files, where each file explains one particular feature.
Example: Appendix A of Firebird_v2.1.3.ReleaseNotes.pdf explains and lists all new built-in functions.
Danny
I uploaded these files
dvdwouwe wrote:
But, the support for firebird 2.1 isn't that good:
It works OK, though there are some advanced things not supported, as Firebird wasn't and still isn't supporting its own work very well. (OK, the rowcount is a problem.) Mind you, Firebird is the worst documented database I've ever run into; wading through a lot of text files to check whether something has 'changed' is not helpful, and there's no simple reference documentation. We have to maintain a lot of database-specific code and it's already a struggle to get a clear picture of what is supported in which DB, and within a DB type, in which version. If a DB makes it very hard to obtain this info (and Firebird has done this for years), it becomes problematic.
That said, your supplied info is of great help and we appreciate your time on this.
Main things: From v2.0.x : * Support for derived tables ==> CreateRowCountDQ can be implemented ==> All Linq statements with derived tables should be supported (I think you already did that) ==> Aliases work completely now also for sub-queries (You already fixed that for me)
Rowcount could indeed be implemented now. Keep in mind that the compatibility setting was added only to fix a bug which had no workaround. What we really don't want is a MySQL kind of patchwork where every build is really a different version with different features. For example, there's also an in-code way to set the compatibility level, like there is for SQL Server: we added Firebird15 and Firebird2x settings. Changing Firebird2x to Firebird20 would break code, and adding two new values would be awkward; we really hate changing settings already tacked on during a version AGAIN some builds later.
About function mappings: you can already add them yourself with custom function mappings passed to the LinqMetaData ctor, if you find a function not supported. That could serve as a workaround for the time being, as most of your post concentrates on that.
The following things should enhance CreateFunctionMappingStore() [...] * New [NOT] DISTINCT test treats two NULL operands as equal. Example:
Val1 is distinct from Val2
is the same as
((val1 is null) and (val2 is not null)) or ((val1 is not null) and (val2 is null)) or (val1 <> val2)
I don't follow what this does or what it is the mapping result of. Function mappings are about .NET methods/properties which are mapped to SQL constructs, and only those methods/properties of .NET CLR classes which are very common are mapped. So any mapping which isn't common isn't mapped in any DQE and has to be added through custom function mappings, which are very easy to do.
- Support for context variables. The context can be System, Session or Transaction. This makes it possible to feed triggers/stored procedures with additional data. For example:
UserModified = coalesce(rdb$get_context('USER_TRANSACTION', 'User_ID'), current_user)
In web applications, where connection pooling is probably always used and only one DB user is created for all sessions, it is cumbersome in some engines to know in triggers/stored procedures who is behind the browser; this solves the problem very easily.
I think this too is a custom mapping of some sort, as I don't see a common .NET CLR method/property represented by this.
- RETURNING clause for insert statements: this is very important regarding sequences/generators. Now in the system we have an extra round trip to determine the surrogate PK of the table if a trigger assigned the key using a generator/sequence. BTW, if between the insert and the
select gen_id(<generator>,0) from rdb$database
query another concurrent insert occurred, you will NOT get the correct generator value!
Isn't that only true when the same connection is used, as it is with all databases that use sequences? If not, then sequences are broken in Firebird, so I think it's per connection; otherwise a sequence value is always potentially problematic. LLBLGen Pro doesn't execute multiple inserts over the same connection if there's already one busy, so this will then never occur (or sequences + triggers would never have worked with Firebird 1.x). This only goes wrong if the trigger inserts new rows using the SAME sequence, creating new values from the sequence at the same time. Why would a trigger do that?
This is why from v2.0.x on it is important to use the following syntax: insert into TableFoo(...) returning PKField into :MyPKVar. This is always safe and no extra round trips are made. BTW the OUTPUT clause in SQL Server does the same, but it can return more than one row. Implementing this is necessary for a highly transactional database; it guarantees that you will have the correct PKField value (if generated by a trigger).
So the RETURNING clause is only necessary if a) the trigger setting is set AND b) a sequence is used? But if it's not possible to get out-of-sync sequence values, I'd like to postpone this to v3, where we'll also change this for other databases which have similar extra round trips and which we can pack together in a single statement.
- EXECUTE BLOCK: you can send multiple SQL statements in one go. It isn't the same as a batch in SQL Server, but it can be used to speed up bulk inserts/updates. It is like an anonymous stored procedure; its source code length is limited to 64 KB. For example, you can write more than one insert statement.
Batching of statements isn't supported and it's not going to be supported in the near future, due to the way our code on top of the raw entity I/O works: there's a lot of functionality around entities available, and batching statements together makes it impossible to run the code for these features. Some time ago we looked into supporting this, but it turned out to be very complicated. There are tricks to batch statements, and for bulk inserts these can be leveraged to give some performance boost; however, things like auditing and authorization validation, which revolve around one entity, are then not really possible, or you have to do them in batch as well, which gives problems when hierarchies are persisted, so we didn't do this. Most databases have bulk insert systems anyway, which bypass the SQL engine completely and which are much faster than batched insert statements.
From v2.1.x : Built-in functions:
We can add those, however we then have to change the compatibility level setting again, as it now has 1.5 and 2.x values...
And I really don't want to add new features to the runtime mid-term: the setting we added was already a big pain, as it was added to fix a serious issue, but it wasn't added the way it should have been, because that would have broken code like I described above.
You have to understand that we're under a tight schedule to finish 3.0, and adding new features to 2.6 isn't our highest priority at the moment (bugfixes/troubleshooting always get priority, though new features to an already released system don't). I also have to rehash what I already said above: in the DQE we only map the common methods/properties, and if you want to add additional mappings, it's very easy to do that using custom function mapping classes which you can pass to the LinqMetaData ctor.
(Snip long list of function descriptions, which is very handy, thank you for that)
- Common Table Expressions (CTEs), even recursive ones (limited to a depth of 1024)
- MERGE statement
CTEs are only used in the SQL Server DQE, for paging. As Firebird has a much easier way of doing paging, we don't use CTEs at all, and neither do we support MERGE.
So Otis, you see, so much has changed that the current implementation isn't very accurate anymore.
The current implementation lets you work with a Firebird database properly; there are some limitations when it comes to Linq, which can indeed be solved, however calling it 'not very good' is IMHO not really doing it justice.
If you want, I can already start improving it and send you the changed code.
If you could make the changes you want to see, that would be very much appreciated and very welcome, as we don't have a lot of time to spend on new features for 2.6, as you probably understood.
There's a big problem though: the compatibility setting. At the moment there are two values, but you need three. While the config file approach isn't that problematic (just make '2' the new 2.1 compatibility setting), the code approach has Firebird2x, which would need to become Firebird20. This breaks code again, and although we didn't document it much, I won't allow a change on that again. So the function mappings have to be added regardless of the compatibility setting.
The only problem that would give is that a 2.0 or 1.5 user would get function mappings in queries which then crash at runtime, but these queries would likely already crash at runtime, just with different errors (like no mapping available). Custom mappings added by these users already take precedence over DQE-supplied mappings, so that's not a problem.
The most critical/important things are: * Make a distinction between 1.5, 2.0 and 2.1, so legacy code will not be broken.
That's not possible; there's only the current distinction: 1.5 and 2.x. I know, it sucks, though at the time of adding that there was no necessity for more fine-grained compatibility levels. As this setting has to be moved to a different assembly in v3, it's going to break anyway in the future, but that's with a version change, not from build to build. As it's just for function mappings, I think it's best if the function mappings are added regardless of the compatibility level. This also gives the advantage that the mappings work when the in-code compatibility setting is used, because the mappings are added in the static ctor and that one runs before anything else.
- Use the RETURNING syntax from version 2.0.x on if a table generates a PK through a trigger.
Please confirm whether this is even necessary, as I doubt it's possible to have multiple inserts with the same sequence on the same connection.
I'd like to add that we appreciate the time you're willing to spend on this. If you need help from us regarding the changes you're planning, please let us know at once through this thread. Thanks
Hi Otis,
About the gen_id function:
gen_id isn't running inside a DB transaction. This means that if you insert a record and the table generates a surrogate PK using this kind of trigger:
CREATE OR ALTER TRIGGER FOO_BI FOR FOO
ACTIVE BEFORE INSERT POSITION 0
AS
BEGIN
IF ( (NEW.ID IS NULL) OR (NEW.ID <= 0) ) THEN
NEW.ID = GEN_ID(GENID_FOO, 1);
END
And after the insert you issue the following select statement:
select gen_id(genid_foo,0) from rdb$database;
At that moment you get the current generator value. So if, between the time of your insert and the select, a concurrent session also inserted a record, you will get the generator value of that LAST inserted record. SQL Server had something similar: @@IDENTITY, which gave you a wrong value if triggers also inserted into another table with identities.
So this means that you have only two safe options:
1) First select the next generator value and use this value to insert the record (this option is also available in LLBLGen Pro), or
2) Use the RETURNING clause to get the correct PK if the trigger generates a new surrogate key.
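A sketch of both options; genid_foo, foo and its columns follow the hypothetical names above, and :new_id is a client-side placeholder:

```sql
-- Option 1: reserve the next key first, then insert it explicitly
select gen_id(genid_foo, 1) from rdb$database; -- fetch next value into :new_id
insert into foo(id, x) values (:new_id, 1);

-- Option 2 (v2.0+): let the trigger assign the key and read it back atomically
insert into foo(x) values (1) returning id;
```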
Otis wrote:
Mind you, Firebird is the worst documented database I've ever ran into, and wading through a lot of textfiles to check whether something has 'changed' is not helpful, and there's no simple reference documentation
Because I have followed this DB for a long time, maybe I can't judge the documentation objectively.
But:
- You have an excellent book that really covers the whole thing: The Firebird Book by Helen Borrie (completely covers 1.5); there are addenda for the next versions 2.0.x and 2.1.x.
- The release notes are a great source of information; they shouldn't be called release notes anyway, as they cover every new feature.
- And last but not least, the official Firebird forums are great: immediate responses and solutions.
- There is online reference documentation for each version:
http://firebirdsql.org/refdocs/langrefupd15.html
http://firebirdsql.org/refdocs/langrefupd20.html
http://firebirdsql.org/refdocs/langrefupd21.html
I understand that your effort is now going into the v3.0 release, but I really don't understand why not every DB has a version compatibility mode like the SQL Server one has. That would make it easier for other developers to enhance the product.
BTW, you already broke some things in the CreateFunctionMappingStore() method:
new FunctionMapping(typeof(Array), "get_Length", 0, "OCTET_LENGTH({0})")
new FunctionMapping(typeof(string), "ToLower", 0, "LOWER({0})")
new FunctionMapping(typeof(string), "Trim", 0, "TRIM({0})")
are v2.0.x functionality, not v1.5.x.
Kind regards, Danny
dvdwouwe wrote:
Hi Otis,
About the gen_id function:
gen_id isn't running inside a DB transaction. This means that if you insert a record and the table generates a surrogate PK using this kind of trigger:
CREATE OR ALTER TRIGGER FOO_BI FOR FOO
ACTIVE BEFORE INSERT POSITION 0
AS
BEGIN
IF ( (NEW.ID IS NULL) OR (NEW.ID <= 0) ) THEN
NEW.ID = GEN_ID(GENID_FOO, 1);
END
And after the insert you issue the following select statement:
select gen_id(genid_foo,0) from rdb$database;
At that moment you get the current generator value. So if, between the time of your insert and the select, a concurrent session also inserted a record, you will get the generator value of that LAST inserted record. SQL Server had something similar: @@IDENTITY, which gave you a wrong value if triggers also inserted into another table with identities.
So this means that you have only two safe options:
1) First select the next generator value and use this value to insert the record (this option is also available in LLBLGen Pro), or
2) Use the RETURNING clause to get the correct PK if the trigger generates a new surrogate key.
Ah, we weren't aware of that tiny detail; we assumed it was as it's done in Oracle/PostgreSQL, for example, where sequences are completely atomic and it's impossible to obtain a value from another transaction's sequence usage.
So to recap: when the 'trigger' flag is set, the generator value has to be obtained before the insert query if the compatibility mode is 15, and a RETURNING clause has to be used if the compatibility mode is 2x?
Otis wrote:
Mind you, Firebird is the worst documented database I've ever ran into, and wading through a lot of textfiles to check whether something has 'changed' is not helpful, and there's no simple reference documentation
Because I have followed this DB for a long time, maybe I can't judge the documentation objectively. But: - You have an excellent book that really covers the whole thing: The Firebird Book by Helen Borrie (completely covers 1.5); there are addenda for the next versions 2.0.x and 2.1.x. - The release notes are a great source of information; they shouldn't be called release notes anyway, as they cover every new feature. - And last but not least, the official Firebird forums are great: immediate responses and solutions. - There is online reference documentation for each version: http://firebirdsql.org/refdocs/langrefupd15.html http://firebirdsql.org/refdocs/langrefupd20.html http://firebirdsql.org/refdocs/langrefupd21.html
When you know what vA supports and you look at the release notes of what vA+1 supports, then they might be great. However, for looking up what is supported, it's crap, and I would actually write that in caps. The main problem is that we have to look at docs differently: we have to know IF some feature is supported, and if it is, how it is supported, and if possible whether it was introduced in some version or supported from the get-go. Every database we work with has reference documentation which contains all this info. Except Firebird. For Firebird we therefore have to read a lot of docs and mix and match them manually, which is tedious and error-prone.
So we try to do our best, but there's a limit to what we can do for every database. For example, we're not Oracle consultants who know every bit of every system table, every system stored proc and every side effect. In 99.9% of the cases that's perfectly fine, but in some edge cases it's not. So if a customer (e.g. you in this case) tells us specifically what the details are, all the better, so we can adapt our code to these details.
I understand that your effort is now going into the v3.0 release, but I really don't understand why each DB doesn't have a version compatibility mode like the SQL Server one has. That would make it easier for other developers to enhance the product.
When we released v1 6 years ago, there was no setting, as there was no SqlServer 2005 and all the databases we supported at the time simply supported all the features. When SqlServer 2005 came along, and it was better to do paging on SqlServer 2005 with CTEs instead of temp tables, we added the compatibility setting to make sure people could use the same code on 7/2000 and 2005. With ANSI joins on Oracle it was different, as the joins are generated outside the DQE. This means that the code generating the joins didn't know what a compatibility mode was; it just knew it had to join non-ANSI.
All other databases have no need for a compatibility mode as all features are supported on all db versions. So adding a compatibility mode is then a bit odd, as it wouldn't be used.
To be honest, I didn't expect the Firebird project to ever come up with a database version which would support functions like other databases have done for many years.
BTW, you already broke some things in the CreateFunctionMappingStore() method:
new FunctionMapping(typeof(Array), "get_Length", 0, "OCTET_LENGTH({0})")
new FunctionMapping(typeof(string), "ToLower", 0, "LOWER({0})")
new FunctionMapping(typeof(string), "Trim", 0, "TRIM({0})")
These map to functionality of version v2.0.x, not v1.5.x.
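For reference, the three mappings listed can be sanity-checked directly against a Firebird 2.0+ database; on 1.5 these built-ins don't exist (they were only available through UDF libraries). A minimal sketch:

```sql
-- Firebird 2.0+ built-ins used by the mappings above; not built-in in 1.5.
SELECT
    OCTET_LENGTH('abc'),   -- maps Array.get_Length (3 in a single-byte charset)
    LOWER('ABC'),          -- maps string.ToLower -> 'abc'
    TRIM('  abc  ')        -- maps string.Trim -> 'abc'
FROM RDB$DATABASE;
```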
True, this is a result of this remark in the getting started page of Linq to LLBLGen Pro documentation:
Linq requires .NET 3.5. Linq to LLBLGen Pro produces queries which can rely on a feature called Derived tables, which means a separate query in the FROM clause of the SELECT statement. Not all databases supported by LLBLGen Pro support this feature: Firebird 1.x and SqlServer CE Desktop 3.1 or earlier don't support derived tables. It's not recommended to use Linq to LLBLGen Pro on these databases, use a version of the database which is newer (Firebird 2.x / SqlServer CE Desktop 3.5).
I.o.w. we don't recommend using Linq to LLBLGen Pro on Firebird 1.5 due to severe limitations. We did add these mappings as those were the ones we could find for 2.0, the version available when LLBLGen Pro 2.6 shipped in June 2008 (or at least, it was in beta I think).
That Firebird got its act together later on with 2.1 is great, though we couldn't possibly know that it would be fixed in v2.1 (it also could have been in Firebird 2.2 or 2.5; a compatibility setting would then be problematic as well, as enums can't be extended in .NET, unfortunately).
Joined: 24-Jul-2009
Hi Otis,
Otis wrote:
Ah, we weren't aware of that tiny detail, we assumed it was as it's done in oracle / postgresql for example, where sequences are completely atomic, and it's impossible to obtain a value from another transaction's sequence usage.
Sequences are treated as a singleton structure in Firebird; let me explain again: every DML statement in Firebird runs in a transaction, so the isolation level determines the allowed anomalies, but incrementing a sequence is done outside any transaction, to prevent handing out duplicate values. This implicitly means you can have gaps: a rollback doesn't mean the fetched sequence value can be reused, because, again, it didn't run inside a transaction (this is also the way Oracle works).
session #1:
start transaction
select gen_id(genid_foo,1) from rdb$database
rollback work

session #2:
start transaction
select gen_id(genid_foo,1) from rdb$database
commit work
Session #1 and session #2 overlap, but they fetch different numbers; conclusion: the sequence fetches don't execute in a transaction context.
BUT, probably Oracle / Postgres (I don't know their implementation, I'm guessing here) cache the last selected/generated sequence value per session, to prevent the situation I explained before, which can happen when concurrent inserts in different sessions use the same sequence.
So, when you fetch the sequence in those environments, they will probably look in the session cache first, and only then in the main sequence storage.
In Firebird, gen_id(genid_foo,0) always looks in the main sequence storage, which means you will fetch the current sequence value at the time the query executes.
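The increment argument makes the difference here; a small sketch of the two uses (assuming a generator named genid_foo exists):

```sql
-- Increment and fetch: atomic, outside any transaction, so gaps can occur.
SELECT GEN_ID(genid_foo, 1) FROM RDB$DATABASE;

-- Read-only peek: returns whatever the global value is at execution time,
-- which may already have been bumped by another session.
SELECT GEN_ID(genid_foo, 0) FROM RDB$DATABASE;
```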
Otis wrote:
So to recap: when the 'trigger' flag is set, the generator value has to be obtained before the insert query if compatibility mode is 15, and a returning clause has to be used if compatibility mode is 2x?
Yes, when the trigger flag is set, selecting the sequence value first works correctly in all versions. But when the trigger flag is set, selecting the sequence afterwards isn't safe in all versions; you should use the RETURNING clause, which only works from 2.0.x and up.
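A sketch in SQL of the two patterns described above (FOO and genid_foo are hypothetical table/generator names):

```sql
-- Fetch the value first, then insert it explicitly.
SELECT GEN_ID(genid_foo, 1) FROM RDB$DATABASE;  -- returns e.g. 42
INSERT INTO FOO (ID, NAME) VALUES (42, 'bar');

-- Firebird 2.0.x and up: let the trigger/generator assign the key and
-- read it back atomically via a RETURNING clause.
INSERT INTO FOO (NAME) VALUES ('bar') RETURNING ID;
```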
I hope I'm clear, because English isn't my native language.
Danny
dvdwouwe wrote:
Hi Otis,
Otis wrote:
Ah, we weren't aware of that tiny detail, we assumed it was as it's done in oracle / postgresql for example, where sequences are completely atomic, and it's impossible to obtain a value from another transaction's sequence usage.
Sequences are treated as a singleton structure in Firebird; let me explain again: every DML statement in Firebird runs in a transaction, so the isolation level determines the allowed anomalies, but incrementing a sequence is done outside any transaction, to prevent handing out duplicate values. This implicitly means you can have gaps: a rollback doesn't mean the fetched sequence value can be reused, because, again, it didn't run inside a transaction (this is also the way Oracle works).
session #1:
start transaction
select gen_id(genid_foo,1) from rdb$database
rollback work

session #2:
start transaction
select gen_id(genid_foo,1) from rdb$database
commit work
Session #1 and session #2 overlap, but they fetch different numbers; conclusion: the sequence fetches don't execute in a transaction context.
BUT, probably Oracle / Postgres (I don't know their implementation, I'm guessing here) cache the last selected/generated sequence value per session, to prevent the situation I explained before, which can happen when concurrent inserts in different sessions use the same sequence.
So, when you fetch the sequence in those environments, they will probably look in the session cache first, and only then in the main sequence storage. In Firebird, gen_id(genid_foo,0) always looks in the main sequence storage, which means you will fetch the current sequence value at the time the query executes.
That's what I indeed understood. Oracle / Postgres use a central store as well, but NEXTVAL/CURRVAL are session/connection specific. So if two connections/sessions request NEXTVAL of the same sequence, they each atomically get a next value which is globally unique (thus similar to Firebird), but CURRVAL is the last value obtained for that sequence in that session/connection. That's the difference: it will never return another session/connection's value, always the one obtained in that same session.
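In Oracle syntax, that session-local behaviour looks roughly like this (seq_foo is a hypothetical sequence; the concrete numbers are only illustrative):

```sql
-- Session A
SELECT seq_foo.NEXTVAL FROM DUAL;  -- e.g. 10, globally unique

-- Session B (concurrently)
SELECT seq_foo.NEXTVAL FROM DUAL;  -- e.g. 11

-- Back in session A
SELECT seq_foo.CURRVAL FROM DUAL;  -- still 10: the last value obtained in
                                   -- THIS session, never session B's 11
```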
Otis wrote:
So to recap: when the 'trigger' flag is set, the generator value has to be obtained before the insert query if compatibility mode is 15, and a returning clause has to be used if compatibility mode is 2x?
Yes, when the trigger flag is set, selecting the sequence value first works correctly in all versions. But when the trigger flag is set, selecting the sequence afterwards isn't safe in all versions; you should use the RETURNING clause, which only works from 2.0.x and up.
Indeed, we understood each other then.
I hope I'm clear, because English isn't my native language. Danny
If you can't describe it in english, dutch is fine (but I'm not sure if you're speaking dutch or french )
What is your proposal to go from here? We have little time reserved for features added to v2.6; however, the sequence stuff is a bug, so we'll fix that for you, and the same goes for the rowcount. Adding the functions could be done on a per-use-case basis by you, in a custom function mapping store you create centrally in your project; when you're done you could eventually share it with other customers here (if v3 isn't available by then, where we've added them as well). This would not put pressure on you to write all the mappings (you probably don't need all of them, just the string-related ones I think) and it doesn't cut into our time budget for v2.6 changes other than bugfixes.
Joined: 24-Jul-2009
Otis,
At the moment I'm using it with SQL Server 2008, but we probably have a rather big project on Firebird v2.1/v2.5 coming up within 2 months.
Concentrate on your next major version 3.0 to implement the new features correctly ! Solving the bug (using the returning clause) is the only thing you can do for v2.6.
Danny
The trigger flag is meant to be used when the trigger obtains a new sequence value. This means that obtaining a value before the query when the flag is set in v1.5 isn't going to work. I.o.w.: in firebird 1.5 with triggers, it's not going to be safe. The DQE already executes the sequence obtaining query before the main query so without setting the trigger flag, it's already doing the right thing.
For 2x compatibility setting and trigger flag set, the returning clause should be used instead of the sequence retrieval query. So that's the change I'll make to the DQE for sequence retrieval.
I can't get it to work. The query
INSERT INTO "CUSTOMER" ("CUST_NO", "CUSTOMER", "CONTACT_FIRST", "CONTACT_LAST", "ADDRESS_LINE1", "CITY")
VALUES (GEN_ID(CUST_NO_GEN, 0), @Customer, @ContactFirst, @ContactLast, @AddressLine1, @City)
RETURNING "CUST_NO" INTO @CustNo1
fails with the error:
Dynamic SQL Error SQL error code = -104 Token unknown - line 1, char 178 ?.
(Executed on FB 2.0)
This seems logical: ref: http://firebirdsql.org/refdocs/langrefupd20-insert.html
The “INTO <variables>” subclause is only available in PSQL.
If I had looked up what 'PSQL' meant before I made these changes, I wouldn't have spent the time on this. The question now is: what happens with the values if INTO ... isn't emitted? It seems a resultset is created, however how to obtain that from an ExecuteNonQuery without parameters is beyond me, as it's not documented.
I'll see if the output parameters are filled in the same order... after all, firebird uses output parameters for resultset returning
(edit) OK, the returned values are stored in the output parameters, in the same order. As the fields + parameters are added in sync, this order is correct. As the insert query itself has no output parameters other than the identity field parameters, this works OK.
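For reference, the variant that works from plain DSQL is the same insert minus the PSQL-only INTO subclause; the engine then hands the returned column back through the statement's output, which (as found above) the .NET provider surfaces as output parameters in field order:

```sql
-- Same query as the failing one above, without "INTO @CustNo1";
-- the CUST_NO value comes back via the statement's output parameters.
INSERT INTO "CUSTOMER" ("CUST_NO", "CUSTOMER", "CONTACT_FIRST", "CONTACT_LAST", "ADDRESS_LINE1", "CITY")
VALUES (GEN_ID(CUST_NO_GEN, 0), @Customer, @ContactFirst, @ContactLast, @AddressLine1, @City)
RETURNING "CUST_NO";
```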
test succeeds now
See the attached DQE for the fix for firebird DQE's sequence mechanism for 2x compatibility. This also implements GetDbCount() properly.
I didn't add function mappings: although some are similar to SqlServer's, the problem is that 2.0 users would then all of a sudden sometimes get a DB function mapping instead of an in-memory mapping for function calls in the projection. So this isn't done today, as it requires a change in the compatibility enum, which is scheduled for 3.0.
Filename | File size | Added on | Approval |
---|---|---|---|
SD.LLBLGen.Pro.DQE.Firebird.NET20.dll | 36,864 | 16-Sep-2009 13:49.38 | Approved |