LLBLGen Pro Runtime Framework
Code first?
Ian
User



Location:
Hertfordshire, UK
Joined on:
01-Apr-2005 16:37:36
Posted:
511 posts
# Posted on: 14-Jun-2011 13:36:39.  
Hi there,

I was wondering how feasible it would be to have a code first version of LLBLGen.

Cheers, Ian.
Otis
LLBLGen Pro Team



Location:
The Hague, The Netherlands
Joined on:
17-Aug-2003 18:00:36
Posted:
37869 posts
# Posted on: 14-Jun-2011 14:05:05.  
For our own framework: no, that's not happening. Our framework isn't a POCO framework, so you'd have to write a lot of code by hand (which is now generated). In the next version we do plan to support EF v4.1 code first.

Frans Bouma
LLBLGen Pro / ORM Profiler Lead Developer | Blog | Twitter
 
Ian
# Posted on: 15-Jun-2011 15:10:25.  
With EF Code First one can put extra db configuration (e.g. unique indexes, values in a lookup table) in a db initializer class that gets run after a new db has been created. Where would one put this extra stuff when using LLBLGen? Is there somewhere within the project where one can add SQL scripts, or perhaps reference scripts to be run upon db generation?

Also, with code first, one's database is always going to be in sync with the code because the db comes from the code. How can I make sure that my LLBLGen project is in sync with the generated code?
Otis
# Posted on: 15-Jun-2011 16:19:12.  
Ian wrote:
With EF Code First one can put extra db configuration (e.g. unique indexes, values in a lookup table) in a db initializer class that gets run after a new db has been created. Where would one put this extra stuff when using LLBLGen? Is there somewhere within the project where one can add SQL scripts, or perhaps reference scripts to be run upon db generation?

The designer exports DDL SQL scripts; you add your extra statements there, as you typically need those scripts as a starting point for the real db creation / maintenance / migration anyway.
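For illustration only (the table, index, and seed values below are hypothetical, not something from this thread), an exported create script with such hand-added extras might look roughly like this:

```sql
-- Generated by the designer (hypothetical table):
CREATE TABLE [dbo].[Country]
(
    [CountryId] INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    [Code]      NCHAR(2)      NOT NULL,
    [Name]      NVARCHAR(100) NOT NULL
);
GO

-- Hand-added after export: a unique index the model doesn't express...
CREATE UNIQUE INDEX [IX_Country_Code] ON [dbo].[Country] ([Code]);
GO

-- ...and seed values for the lookup table.
INSERT INTO [dbo].[Country] ([Code], [Name])
VALUES (N'GB', N'United Kingdom'), (N'NL', N'The Netherlands');
GO
```

The script then goes into source control next to the project, so the extras travel with every re-export.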

Quote:
Also, with code first, one's database is always going to be in sync with the code because the db comes from the code. How can I make sure that my LLBLGen project is in sync with the generated code?

By refreshing the catalog? LLBLGen Pro works from the designer, with the abstract model. Code and tables are derivatives of that, not the starting point. So ideally you manipulate the model, which results in changes in the tables (exported as DDL SQL, so you can create migration scripts, which is key for changes made after your initial start) and in the code (when you re-generate).


Ian
# Posted on: 15-Jun-2011 18:13:16.  
OK. I think it would be cool if you could run the DDL SQL scripts directly from the designer. Then one generation run could update both the code and the db, and you'd never see the DDL SQL.

Quote:
by refreshing the catalog?


After having used Code First, doing stuff like this all feels a bit manual!
Otis
# Posted on: 16-Jun-2011 11:04:57.  
Ian wrote:
OK. I think it would be cool if you could run the DDL SQL scripts directly from the designer. Then one generation run could update both the code and the db, and you'd never see the DDL SQL.

That's a fantasy which only works in the first week of development. After a while, with more developers, with a central DB to test on, heck, after the first release when you have to add more features (most developers work on maintenance instead of new stuff!), you can't simply 'update the db', as you need to migrate what's there. Hence the scripts. If you have a DBA, s/he wants to see the scripts and/or use them to set up tests to see whether the migration works OK.

Quote:

Quote:
by refreshing the catalog?

After having used Code First, doing stuff like this all feels a bit manual!

In the beginning... sure. But after a few iterations, with more people on the team and some maintenance work to do, you'll see it will actually get in your way and restrict you.

That's what I find so annoying about the current 'hype'-driven communication on the internet these days with respect to development: people only focus on starting a new project. But most developers do maintenance work on existing software; they don't start from scratch. And even if your project is new, once it has been under way for a long time it comes down to maintenance work: you can't simply 'write some code' and 'everything is taken care of'. Well... you could, of course, but chances are it will end in tears once lots of data has been put into the model or things have to be changed after release.


Ian
# Posted on: 16-Jun-2011 12:52:27.  
Quote:
After a while, with more developers, with a central DB to test on, heck, after the first release when you have to add more features (most developers work on maintenance instead of new stuff!), you can't simply 'update the db', as you need to migrate what's there. Hence the scripts.


This wasn't my experience when using Code First recently. While developing, I could change the model however I liked and run a unit test that generated a new database. There didn't appear to be any need for a central DB to test on.

When it came time to update a previous release then yes, things got a bit trickier because a detailed migration was needed. But during development, "write some code and everything is taken care of" appeared to work extremely well.
Otis
# Posted on: 16-Jun-2011 13:41:01.  
... in isolation. When multiple branches of development come together, you need a central DB to test the stuff on. Have fun migrating that with code first ;) Sure, you can rebuild the DB from scratch, but maintenance is the major part of an application's lifetime, and a lot of developers spend most of their time in that timeframe on existing code.

When you started with code first, how are you going to migrate to non-code first? How do migrations take place and, above all, how are changes justified? Because remember: just because a database can be created from code doesn't mean it's the right model. The entity model is what's the source, not the code. Developers like to think in code and assume what they're typing is the source of all good and joy, but that's just a fantasy: the code one writes in code first isn't the source of the definition, but the result of projecting a model onto code. Making changes in that code is therefore similar to making changes in the IL of a compiled C# program and then expecting the C# code to be updated.


Ian
# Posted on: 16-Jun-2011 14:35:14.  
Quote:
When multiple branches of development come together, you need a central DB to test the stuff on.


There's a central code repository and from that everyone can generate their own local database instance. I would hazard a guess that the only reason teams typically use a central DB to test against is that setting up a local instance is so fiddly. But of course, sharing a DB means that one person can break it for the whole team.

I don't think it matters whether a project has just been started or is under maintenance: one needs an up-to-date database instance to work with, so one generates one with the click of a button rather than playing around with exporting SQL scripts.

Quote:
The entity model is what's the source, not the code.


Which means LLBLGen could be doing a better job of this than EF Code First! I play around with the model in the designer, press a button, and there's my new db instance and my new corresponding code...

I'm getting a bit lost here. I'm just suggesting that LLBLGen should make it easy for a developer to generate/update a db instance from the model. Migrating changes to a live db is a separate issue that's going to be there regardless and would most likely be done manually.
Otis
# Posted on: 17-Jun-2011 11:20:46.  
Ian wrote:
Quote:
When multiple branches of development come together, you need a central DB to test the stuff on.

There's a central code repository and from that everyone can generate their own local database instance. I would hazard a guess that the only reason teams typically use a central DB to test against is that setting up a local instance is so fiddly. But of course, sharing a DB means that one person can break it for the whole team.

Isn't the end system going to run on a single DB? You can't always have just a DB with your own tables; you will have to use other people's tables as well at some point.

Quote:

I don't think it matters whether a project has just been started or is under maintenance: one needs an up-to-date database instance to work with, so one generates one with the click of a button rather than playing around with exporting SQL scripts.

If your DB contains millions of rows of data, creating a new schema is not your problem; migrating the existing data to the new schema is. Hence the scripts: you can use them to first test the migration (by adding code to the scripts to make the migration work) and then do the migration in production.

I don't think you fully grasp the problems related to migrating production databases to a slightly different schema. Adding a table or two is easy. Adding new tables which will contain data from an existing table, tables which are split up, newly added relationships which make current data invalid... those are different problems, all very real in a development process which has to deal with real data.
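As a sketch of the harder case (all table and column names here are hypothetical, and a real script would also have to handle dependent indexes, constraints, and error handling), a migration that splits existing columns out into a new table has to interleave DDL and data movement in a fixed order, roughly:

```sql
-- Hypothetical migration: move address columns out of an existing Customer
-- table into a new Address table, carrying the production data along.
-- (T-SQL; GO is the batch separator, so each step sees the previous one.)
CREATE TABLE [dbo].[Address]
(
    [AddressId]     INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    [Street]        NVARCHAR(200) NOT NULL,
    [City]          NVARCHAR(100) NOT NULL,
    [OldCustomerId] INT NULL  -- helper column, dropped at the end
);
GO

-- 1) Copy the existing data, remembering which customer each row came from.
INSERT INTO [dbo].[Address] ([Street], [City], [OldCustomerId])
SELECT [Street], [City], [CustomerId]
FROM [dbo].[Customer];
GO

-- 2) Point each customer at its new address row.
ALTER TABLE [dbo].[Customer] ADD [AddressId] INT NULL;
GO
UPDATE c
SET c.[AddressId] = a.[AddressId]
FROM [dbo].[Customer] AS c
JOIN [dbo].[Address]  AS a ON a.[OldCustomerId] = c.[CustomerId];
GO

-- 3) Only now is it safe to drop the old columns and tighten constraints.
ALTER TABLE [dbo].[Address]  DROP COLUMN [OldCustomerId];
ALTER TABLE [dbo].[Customer] DROP COLUMN [Street];
ALTER TABLE [dbo].[Customer] DROP COLUMN [City];
ALTER TABLE [dbo].[Customer] ADD CONSTRAINT [FK_Customer_Address]
    FOREIGN KEY ([AddressId]) REFERENCES [dbo].[Address] ([AddressId]);
GO
```

None of this ordering can be derived from the new schema alone, which is why a drop-and-recreate approach can't replace a migration script once real data exists.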

Quote:

Quote:
The entity model is what's the source, not the code.

Which means LLBLGen could be doing a better job of this than EF Code First! I play around with the model in the designer, press a button, and there's my new db instance and my new corresponding code...

I'm getting a bit lost here. I'm just suggesting that LLBLGen should make it easy for a developer to generate/update a db instance from the model. Migrating changes to a live db is a separate issue that's going to be there regardless and would most likely be done manually.

A shiny new DB is just a small fraction of what most people need. We generate the scripts to make sure people understand what's going on; they can make last-minute changes (e.g. change a collation, add defaults). Just 'create a new catalog' is perhaps nice, but not very practical for most people. Hence, the scripts.

My point is that code first is IMHO the wrong way of doing database-oriented development: it doesn't focus on what's really important. Instead, it lets the developer simply write code, and 'how these objects are persisted is taken care of'. That's not going to work. You can't abstract away a database.

This has nothing to do with you; I just get annoyed in general about 'code first' and 'code oriented persistence'. It's nonsense: people will only end up with sub-optimal systems which will sooner or later run into a wall. The sad thing is that most people starting these projects have moved on to other projects by then, leaving the maintainers with sub-optimal systems which are hard to modify: refactoring the code has a big impact on the tables, and migrating the data (as it's in production!) will be cumbersome. Code first simply doesn't embed a single piece of 'maintainability'; on the contrary. What I'd like to see is that people realize that code first doesn't free you from designing with maintainability in mind.

The generated scripts are part of that: you get a script. You have to read it. It's a physical piece of code you can use to build the schema to the point where you need it to be. As it's code, you can optimize it, change it, and even throw (parts of) it away if you have a better idea. Store the scripts in source control, along with the update scripts.


Ian
# Posted on: 17-Jun-2011 16:18:35.  
Quote:
I just get annoyed in general about 'code first' and 'code oriented persistence'. It's nonsense: people will only end up with sub-optimal systems which will sooner or later run into a wall.


Why is it OK to have an OR mapper generate potentially sub-optimal SQL queries but not OK for one to generate potentially sub-optimal schemas?
Otis
# Posted on: 17-Jun-2011 16:42:31.  
Ian wrote:
Quote:
I just get annoyed in general about 'code first' and 'code oriented persistence'. It's nonsense: people will only end up with sub-optimal systems which will sooner or later run into a wall.


Why is it OK to have an OR mapper generate potentially sub-optimal SQL queries but not OK for one to generate potentially sub-optimal schemas?

It's not OK to have sub-optimal SQL queries: there should be ways to optimize the query from code, i.e. multiple ways to write the same query. This is also why we went to great lengths to provide query systems which can do that. Linq is not suitable for this, but our new queryspec and also our lower-level api are. :)

But let's say it's acceptable to have sub-optimal SQL queries. It's still not acceptable to have sub-optimal schemas, because databases tend to live longer than the application which was initially built for them (the phrase 'legacy database, needs new application' is heard more often than 'legacy application, needs new database'). Additionally, the schema gives meaning to the data. If you need the code to give meaning to the data, you are in a bit of trouble if the application goes away or is refactored.

That's also why the schema should reflect reality, i.e. the abstract entity model, so it should be a projection of it. Code which is a projection of it should reflect reality too; however, as code is also the place for functionality, it can be refactored to make it more usable for developers to develop with.

In that light, if you create a code base which is tailored towards easy development, pure OO, etc. (just coining a few aspects), the schema reflecting that code structure isn't ideal, at least not for the data if you look at that separately from the code.

Not every project suffers from this, of course. Sometimes the data is outlived by the application, and the database will surely be replaced if the application goes away. But it's always key to keep in mind that the database might stick around longer than the application, e.g. because the data (together with the schema -> information) is likely key to the survival of the organization.


Ian
# Posted on: 17-Jun-2011 18:07:34.  
Quote:
It's not OK to have sub-optimal SQL queries: there should be ways to optimize the query from code, i.e. multiple ways to write the same query. This is also why we went to great lengths to provide query systems which can do that. Linq is not suitable for this, but our new queryspec and also our lower-level api are.


Why not provide the same flexibility for schema generation, to avoid sub-optimal schemas? Code First does attempt to do this with its Fluent API.

Quote:
In that light, if you create a code base which is tailored towards easy development, pure OO, etc. (just coining a few aspects), the schema reflecting that code structure isn't ideal, at least not for the data if you look at that separately from the code.


Yes, I agree that code is not necessarily the best place to describe a model, but I think LLBLGen's new model-first design results in a similar abstraction of the database to the one EF Code First produces. So surely the same criticisms being leveled at code first can be leveled at model first?

I think this is what I'm stuck on... given your insistence that the db should be managed at the SQL level, what is LLBLGen's model designer for? My assumption was that it was there to generate a db in the same way one might with Code First, but clearly I'm missing the intention here.

If I use the designer to create entities and then export SQL and optimize it, isn't there a risk of the db tables getting out of sync with the generated entities? I'd feel safer just sticking to the traditional workflow of working with Management Studio and then reflecting the entities off the db.

And if one is going to modify the exported SQL, then won't the model stored in the LLBLGen project become out of date and irrelevant?
Otis
# Posted on: 20-Jun-2011 17:58:31.  
Ian wrote:
Quote:
It's not OK to have sub-optimal SQL queries: there should be ways to optimize the query from code, i.e. multiple ways to write the same query. This is also why we went to great lengths to provide query systems which can do that. Linq is not suitable for this, but our new queryspec and also our lower-level api are.


Why not provide the same flexibility for schema generation, to avoid sub-optimal schemas? Code First does attempt to do this with its Fluent API.

I don't really follow what this has to do with the quote ;). Where is our DDL SQL sub-optimal, btw? Code first can do this because it keeps track of stuff in a table. In our eyes that's not acceptable: database schemas shouldn't contain tool data.

Quote:
Quote:
In that light, if you create a code base which is tailored towards easy development, pure OO, etc. (just coining a few aspects), the schema reflecting that code structure isn't ideal, at least not for the data if you look at that separately from the code.

Yes, I agree that code is not necessarily the best place to describe a model, but I think LLBLGen's new model-first design results in a similar abstraction of the database to the one EF Code First produces. So surely the same criticisms being leveled at code first can be leveled at model first?

Not really. An entity can result in two classes or more: other classes might be generated because of the entity. Model first solely focuses on the entity, not on what code is generated; that's defined in/with the templates. Using a POCO template set results in different classes / inheritance hierarchies than using non-POCO templates (as those have a base class in some assembly).

Quote:

I think this is what I'm stuck on... given your insistence that the db should be managed at the SQL level, what is LLBLGen's model designer for? My assumption was that it was there to generate a db in the same way one might with Code First, but clearly I'm missing the intention here.

My insistence isn't about managing the DB at the SQL level; it should be managed at the entity model level. DDL SQL scripts are a tool to migrate the DB so it reflects the new schema. As DDL SQL scripts in practice work best for migrating schemas (you can simply run them, or use them as a base for a larger migration plan; that's up to you), you get DDL SQL scripts.

Quote:

If I use the designer to create entities and then export SQL and optimize it, isn't there a risk of the db tables getting out of sync with the generated entities? I'd feel safer just sticking to the traditional workflow of working with Management Studio and then reflecting the entities off the db.

That's not the same thing: creating tables by hand is what the designer does for you: you project the entity (defined in your head) onto a schema, resulting in a table. Then you reverse-engineer the table back to an entity definition which, if everything went OK, reflects the entity definition in your head. This is OK, but it depends on what you have in your head still being there tomorrow and not changing while you're working on it.

Optimizing the DDL SQL is not really the point of it. The point is that you can use the script to get an overview of what has to be done: whether you have to do it in multiple steps (break up the script), migrate data first (add sql to the script), change collations, add default constraints and, above all, test it :) You can re-run DDL SQL scripts as much as you like in test situations, send them to other people to run on their systems, etc.

Quote:

And if one is going to modify the exported SQL then won't the model stored in the LLBLGen project become out of date and irrelevant?

If you remove fields, add fields, etc., that might be the case, yes. The script is there for updating your DB. Adding additional definitions is perhaps not a good idea unless you know what you're doing. The same is true when you export a script from a schema diff tool: you can add what you want, but removing tables, FKs, fields, etc. has obvious consequences. For that type of action, the script isn't the right place.

