Using LLBLGen Pro and WCF

Posts: 13
Joined: 21-Nov-2006
# Posted on: 21-Nov-2006 15:34:23   

Using LLBLGen Pro 2.0 with runtime version 2.0.50727, generating .NET 2.0 code in adapter mode.

I have created a WCF service contract that mirrors the DataAccessAdapter interface. The goal was to run our data access layer over an abstracted transport, which WCF provides beautifully. I have worked through a lot of issues, and I think this is very possible, although it would require some help from the good folks at Solutions Design. I would be willing to share everything we do with you guys.
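To give an idea of the shape of it, here is a stripped-down sketch of the kind of contract I mean (illustrative names only; usings for the ORMSupportClasses and generated code omitted):

using System.ServiceModel;

[ServiceContract]
public interface IRemoteDataAccessAdapter
{
    // Loosely mirrors DataAccessAdapter.FetchEntityCollection. The filter parameter
    // is where the serialization trouble described below starts.
    [OperationContract]
    EntityCollection<CustomerEntity> FetchCustomers(RelationPredicateBucket filterBucket);
}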

Now the problem. Some of the LLBLGen Pro support classes contain internal references to System.Type (via the concrete System.RuntimeType). Although System.Type is marked as Serializable, the data contract serializer can't handle it.

See this thread I started on the MSDN support forum for more information: http://forums.microsoft.com/msdn/ShowPost.aspx?postid=942730&siteid=1

Is it possible to remove the references to System.Type in the support classes? Or would there be another option?

Thanks,

Casey Manus

Posts: 13
Joined: 21-Nov-2006
# Posted on: 21-Nov-2006 16:35:56   

Specifically, the problem occurs when I try to pass a RelationPredicateBucket and the serializer gets to the FieldInfo.DataType property. I am trying to create the FetchEntityCollection method when I run into this issue. Any chance we could remove that property and replace it with a string representation that facilitates loading the type when needed, or make FieldInfo IXmlSerializable?
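Something along these lines is what I have in mind - purely illustrative, not the actual FieldInfo class:

// using System; using System.Runtime.Serialization;
[DataContract]
public class FieldInfoData
{
    // Carry the CLR type as an assembly-qualified name string, which the data contract
    // serializer handles fine, and resolve it back to a System.Type only where it's needed.
    [DataMember]
    public string DataTypeName;   // e.g. typeof(decimal).AssemblyQualifiedName

    public Type ResolveDataType()
    {
        return Type.GetType(DataTypeName, true);
    }
}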

-Casey

Otis
LLBLGen Pro Team
Posts: 39612
Joined: 17-Aug-2003
# Posted on: 22-Nov-2006 10:54:47   

That's not possible as these objects are a fundamental part of the inner workings of the O/R core. What I wonder is why you're doing the webservice communication at such a low level. Why not utilize the service at a very high level, i.e. on the functionality level?

Frans Bouma | Lead developer LLBLGen Pro
jaschag
User
Posts: 79
Joined: 19-Apr-2006
# Posted on: 22-Nov-2006 12:01:13   

Otis wrote:

That's not possible as these objects are a fundamental part of the inner workings of the O/R core. What I wonder is why you're doing the webservice communication at such a low level. Why not utilize the service at a very high level, i.e. on the functionality level?

Hi Frans,

Does that mean we can't have a webservice method along these lines?

EntityCollection<OrderEntity> GetOrders(RelationPredicateBucket filter)

Jascha

Otis
LLBLGen Pro Team
Posts: 39612
Joined: 17-Aug-2003
# Posted on: 22-Nov-2006 12:44:57   

jaschag wrote:

Otis wrote:

That's not possible as these objects are a fundamental part of the inner workings of the O/R core. What I wonder is why you're doing the webservice communication at such a low level. Why not utilize the service at a very high level, i.e. on the functionality level?

Hi Frans,

Does that mean we can't have a webservice method along these lines?

EntityCollection<OrderEntity> GetOrders(RelationPredicateBucket filter)

Jascha

Correct. RelationPredicateBucket isn't serializable to XML; it is over remoting. So if you're planning to use remoting, it will work OK; with XML webservices, it won't. You have to create a service which produces the filter inside the method based on input - which is exactly how webservices are meant to be used: message-based, high-level services (according to Vasters, Box etc.).
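A minimal sketch of what I mean, assuming the usual Northwind-style generated names (illustrative only; usings for System.ServiceModel and the generated code omitted): the client passes plain data, the service builds the filter itself.

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    EntityCollection<OrderEntity> GetOrdersForCustomer(string customerId);
}

public class OrderService : IOrderService
{
    public EntityCollection<OrderEntity> GetOrdersForCustomer(string customerId)
    {
        // The filter never crosses the wire; it's constructed here from the plain input.
        RelationPredicateBucket filter = new RelationPredicateBucket(OrderFields.CustomerId == customerId);
        EntityCollection<OrderEntity> orders = new EntityCollection<OrderEntity>(new OrderEntityFactory());
        using(DataAccessAdapter adapter = new DataAccessAdapter())
        {
            adapter.FetchEntityCollection(orders, filter);
        }
        return orders;
    }
}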

Frans Bouma | Lead developer LLBLGen Pro
jaschag
User
Posts: 79
Joined: 19-Apr-2006
# Posted on: 22-Nov-2006 18:28:49   

Otis wrote:

Correct. RelationPredicateBucket isn't serializable to XML; it is over remoting. So if you're planning to use remoting, it will work OK; with XML webservices, it won't. You have to create a service which produces the filter inside the method based on input - which is exactly how webservices are meant to be used: message-based, high-level services (according to Vasters, Box etc.).

Frans,

We meet again over the services debate!

So if I want to create a webservice method that provides a flexible way of defining the filter criteria when requesting an entity graph then I will end up creating something like the RelationPredicateBucket (but serializable) that I then translate at the server into a RelationPredicateBucket. That sounds like unnecessary work and wheel reinvention to me!!!

I have to confess to feeling like a heretic when it comes to webservices and soa in general! There seems to be a puritanical view that a service should be interoperable, decoupled, high-level, message-based and so on as if there is only one right answer. But we all know from experience that there are infinite problems, many "right" answers, no black and white issues but shades of grey.

Quite often, the only additional problem a distributed system introduces is the fact that the data we require access to is at the other end of a potentially slow, high-latency, less reliable link. An effective solution to that additional problem is to ensure the communication is done in a way that is stateless and minimises round trips and data volumes. Other than that, the problem is the same as if the data were local and I would argue that in all other respects the solution should stay the same too.

So let's return to a carefree and happy world where we pass entities and RelationPredicateBuckets over the wire and spend the afternoon relaxing while others toil duplicating logic at the client and coding DTOs. ;)

Please excuse my rant - I have to go now - there is an angry crowd of soa evangelists gathering outside my office...

Jascha

mihies
User
Posts: 800
Joined: 29-Jan-2006
# Posted on: 23-Nov-2006 10:20:17   

First question would be: But do you really need a webservice?

jaschag
User
Posts: 79
Joined: 19-Apr-2006
# Posted on: 23-Nov-2006 12:50:00   

mihies wrote:

First question would be: But do you really need a webservice?

Hi mihies,

That's a good question at the moment - the gap between remoting and webservices is still relatively small, particularly if you host remoting under IIS. However, the momentum behind webservices (WSE-*) is widening that gap, and I would not be surprised if the arguments for webservices over "competing" technologies soon become overwhelming - remember DCOM?

But for me the real issue is that the assumption that there is only one "right" way to do webservices and service orientation in general may lead to unnecessary technological restrictions. Webservices aside, it is commonly accepted that long-running locking transactions at the database level are a bad design, but that may not be the case for a single-user application. Retrieving a million entities from a database is often seen as bad design, but that may not be the case in a batch processing scenario, and so on. In these two examples, the key point is that the developer is free to do these things if they believe they are appropriate - and sometimes they are. So my point is that these assumptions about how to "do" service orientation may well represent excellent guidance for good implementation patterns, but they should not be enforced technologically. I.e. they are best-practice guidelines, not rules, and the developer is ultimately responsible for applying them when appropriate.

I suspect I am preaching to a rather small audience so I will sign off now!

Jascha

mihies
User
Posts: 800
Joined: 29-Jan-2006
# Posted on: 23-Nov-2006 14:41:28   

jaschag wrote:

Hi mihies,

That's a good question at the moment - the gap between remoting and webservices is still relatively small, particularly if you host remoting under IIS. However, the momentum behind webservices (WSE-*) is widening that gap, and I would not be surprised if the arguments for webservices over "competing" technologies soon become overwhelming - remember DCOM?

But for me the real issue is that the assumption that there is only one "right" way to do webservices and service orientation in general may lead to unnecessary technological restrictions. Webservices aside, it is commonly accepted that long-running locking transactions at the database level are a bad design, but that may not be the case for a single-user application. Retrieving a million entities from a database is often seen as bad design, but that may not be the case in a batch processing scenario, and so on. In these two examples, the key point is that the developer is free to do these things if they believe they are appropriate - and sometimes they are. So my point is that these assumptions about how to "do" service orientation may well represent excellent guidance for good implementation patterns, but they should not be enforced technologically. I.e. they are best-practice guidelines, not rules, and the developer is ultimately responsible for applying them when appropriate.

I suspect I am preaching to a rather small audience so I will sign off now! Jascha

Well, not every technology is for every use. So, you would use webservices only if you want total interoperability and passing simple types makes sense. If you don't need such interoperability you would use remoting, as it behaves much better. Back to your original question: Frans' classes are very much .net specific, so it wouldn't make sense to pass them around in XML format; one would use remoting instead.

mhnyborg
User
Posts: 14
Joined: 26-Apr-2006
# Posted on: 24-Nov-2006 22:09:34   

How can it be that Microsoft is "selling" smart clients and doesn't have an easy framework for communicating between the server and client? It's like every ORM and data framework is supposed to be used in an old-fashioned client/server environment (LAN).

I have tried CSLA and NetTiers. But CSLA is missing the ORM part - OK, I know about JCL, but CSLA is a little too use-case driven for my taste. I like the domain model separate from the service layer. NetTiers is a fine piece of work, hats off, but it forces me to design my database as the domain model, which is not good.

For the last 3 weeks I have been playing around with LLBLGen and I like it a lot, but what is the best way to build a service layer? I would like to use WCF, but it seems that it is impossible! What kind of encoding have you been using - binary or MTOM?

mihies
User
Posts: 800
Joined: 29-Jan-2006
# Posted on: 25-Nov-2006 19:55:19   

mhnyborg wrote:

How can it be that Microsoft is "selling" smart clients and doesn't have an easy framework for communicating between the server and client? It's like every ORM and data framework is supposed to be used in an old-fashioned client/server environment (LAN).

Remoting? Webservices? Raw TCP? What's easier than remoting?

mhnyborg wrote:

For the last 3 weeks I have been playing around with LLBLGen and I like it a lot, but what is the best way to build a service layer? I would like to use WCF, but it seems that it is impossible! What kind of encoding have you been using - binary or MTOM?

It is perfectly possible. While I don't (yet) use WCF I do use remoting and it works just fine. What's your problem?

Posts: 13
Joined: 21-Nov-2006
# Posted on: 27-Nov-2006 15:09:44   

The point of WCF is to abstract the transport. We want to develop a smart client application that is compatible with all WCF transports and then leave it to our operations department to figure out what transport / hosting option they like best for the server-side components. This is what WCF gives you: you simply defer that work and that decision making until needed.
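For example, here is a rough sketch (made-up addresses, with IOrderService / OrderService standing in for whatever contract we end up with) of the same service exposed over two transports, chosen purely at host time:

using System;
using System.ServiceModel;

class HostProgram
{
    static void Main()
    {
        // Same service type, two transports; only this hosting code (or config) has to change.
        ServiceHost host = new ServiceHost(typeof(OrderService), new Uri("http://localhost:8000/orders"));
        host.AddServiceEndpoint(typeof(IOrderService), new BasicHttpBinding(), "");
        host.AddServiceEndpoint(typeof(IOrderService), new NetTcpBinding(), "net.tcp://localhost:8001/orders");
        host.Open();
        Console.WriteLine("Service running. Press enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}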

I was looking at a way to do this generically when I started this thread, but after reading Frans' comments and the material on SOA, I have to agree with him for the most part. It means more "custom" code exists on the server, a lot of which could be generated. This is not a big deal.

Quick question: will I have any problems serializing the PrefetchPath structures? I would still like to give the client coders some options when loading an entity.

Thanks,

-Casey

Dhominator
User
Posts: 16
Joined: 28-Dec-2004
# Posted on: 27-Nov-2006 20:58:27   

Otis wrote:

That's not possible as these objects are a fundamental part of the inner workings of the O/R core. What I wonder is why you're doing the webservice communication at such a low level. Why not utilize the service at a very high level, i.e. on the functionality level?

Frans, the motivation is to abstract the transport [via wcf/other]... not to write a webservice. View the webservice as a concrete instance of the abstracted transport. Ideally, the core server-side code could be reused across concrete transports... each transport requiring a thin [server-side] transport adapter.

Conceptually, what is the problem using a DataAccessAdapter-like interface to the DAL? The "chatty" version being the Fetch* calls and the "less-chatty" version via UoW. This would seem to be a general purpose solution, with minimal effort.

I don't see xml serialization as a make-it or break-it issue. We just happened to spike against xml serialization. That being said... is it a fundamental change to allow these types to be xml serializable?

Best, /jhd

John Dhom

Otis
LLBLGen Pro Team
Posts: 39612
Joined: 17-Aug-2003
# Posted on: 28-Nov-2006 11:26:30   

commonsensedev wrote:

The point of WCF is to abstract the transport. We want to develop a smart client application that is compatible with all WCF transports and then leave it to our operations department to figure out what transport / hosting option they like best for the server-side components. This is what WCF gives you: you simply defer that work and that decision making until needed.

I was looking at a way to do this generically when I started this thread, but after reading Frans' comments and the material on SOA, I have to agree with him for the most part. It means more "custom" code exists on the server, a lot of which could be generated. This is not a big deal.

Quick question: will I have any problems serializing the PrefetchPath structures? I would still like to give the client coders some options when loading an entity.

Thanks, -Casey

The current Xml serialization code in LLBLGen Pro is focused on transporting entities and their data by value via Xml. This is similar to how datatables etc. are used in webservices scenarios.

The XmlSerializer class can't work with interfaces, cyclic references etc. This means that it gives up quite easily and therefore classes need their own xml serialization code. This is currently not built in for a lot of classes, like predicates, relations, prefetch paths etc.
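To give an idea of what that involves, here is a minimal sketch (purely illustrative, not framework code) of a class providing its own xml serialization via IXmlSerializable:

using System.Xml;
using System.Xml.Schema;
using System.Xml.Serialization;

public class SimpleFilterElement : IXmlSerializable
{
    public string FieldName;
    public string Value;

    public XmlSchema GetSchema() { return null; }

    public void WriteXml(XmlWriter writer)
    {
        // Serialization is the easy half...
        writer.WriteElementString("FieldName", FieldName);
        writer.WriteElementString("Value", Value);
    }

    public void ReadXml(XmlReader reader)
    {
        // ...deserialization is where it gets cumbersome for a real object graph.
        reader.ReadStartElement();
        FieldName = reader.ReadElementString("FieldName");
        Value = reader.ReadElementString("Value");
        reader.ReadEndElement();
    }
}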

I'm not sure if there will ever be xml code which serializes them to xml (and back!) as it will be quite cumbersome. The thing is that by offering xml serialization for these, the whole idea of how webservices should be used is undermined. Webservices are NOT a low-level tier in your application. They're high-level services performing a well-defined piece of functionality in a call-response fashion. In other scenarios, there's little use in going through the trouble of adding the overhead related to a webservice, as there are better alternatives.

I've peeked into the WCF docs to get a grip on what the core goal for WCF is, and IMHO it's message-based services with a well-defined service description/contract, exactly how I described it above, though through XML.

Nothing is impossible, so if it's required to have predicates and what not in xml form to fully utilize WCF (which I doubt), I'll add the necessary code. But make no mistake: it's far more efficient to offer a method like GetCustomer(string customerID)

than to offer GetCustomer(IPredicate customerFilter)

simply because the predicate -> xml and xml -> predicate conversions will, no matter what, take more time and more data.

What people should avoid at all costs is the stupid way in which Microsoft tried to sell webservices to us in vs.net 2002: a webservice isn't a low-level tier, and most people agree on that now. (I'll leave the whole 'is SOA even necessary' question out of the equation for now.) Webservices at a low level will bring your service down sooner or later, both through chatty services and also because of the unmaintainability.

It's a myth that you can replace a tier with a webservice, a webservice isn't a tier. It's a feature/functionality provider, so it embeds more tiers.

Dhominator wrote:

Otis wrote:

That's not possible as these objects are a fundamental part of the inner workings of the O/R core. What I wonder is why you're doing the webservice communication at such a low level. Why not utilize the service at a very high level, i.e. on the functionality level?

Frans, the motivation is to abstract the transport [via wcf/other]... not to write a webservice. View the webservice as a concrete instance of the abstracted transport. Ideally, the core server-side code could be reused across concrete transports... each transport requiring a thin [server-side] transport adapter.

A webservice is about offering a well-defined piece of functionality. How it can be about defining a transport is beyond me, sorry.

Conceptually, what is the problem using a DataAccessAdapter-like interface to the DAL? The "chatty" version being the Fetch* calls and the "less-chatty" version via UoW. This would seem to be a general purpose solution, with minimal effort.

That's not how you should use webservices. You should call a method on a webservice which performs ALL the work for you; you shouldn't call 10 methods on a webservice to perform the work. That's the whole point. Needing to call 10 methods (10 is a random number I picked, you get the point) in a given order to complete a well-defined piece of functionality isn't doing it right (well, what 'right' is in this matter is highly subjective, but for me the other way is simply a reason to ask why you would use webservices in the first place): you need 10 connects to the service, 10 times data being xml serialized, deserialized and processed by the service, and for the data reported back: xml serialization, xml deserialization etc. etc.

That's VERY intense usage of a stack of layers which has a HIGH level of overhead. It's much more efficient to call 1 method, send the data the method requires, receive the result back and move on.

If that doesn't suit the application you're writing, it might be that webservices aren't the right way of doing things.

I see people having, on a LAN, a webserver and a db server with a webservice. The webserver is calling the webservice for data for the website. Nothing else uses the webservice. I don't understand this design: why not move the db code to the webserver and access the db server over the network? It's not that this is more secure, it's not: the webservice is also exposed with an api that serves the complete website. And if the db is a big oracle box with all the company data, perhaps the website should use a store-forward db in the middle.

The thing is: there's a lot of overhead involved in using a webservice. People seem to forget that but it's one of the most lame ways of doing RPC ever created. If you look at it without any strings to any technology, you can only wonder: "why on earth are people even spending time on this".

It's however reality for a lot of teams, mostly forced onto them by MS propaganda: MS wants everyone to be using webservices as its business model will move to webservices in a few years (or so they think).

I don't see xml serialization as a make-it or break-it issue. We just happened to spike against xml serialization. That being said... Is it a fundamental change to allow these type to be xml serializable?

How can a dataaccessadapter be xml serializable? Xml is about data, not about objects which have live state on the server. That's the mismatch: a webservice is a piece of functionality and to use it you wrap your command for it with its data in xml, send it over and you'll receive a piece of xml back which you eventually interpret as something. But it's ANOTHER instance.

Remoting offers proxies: you call a method on the client, which is wrapped in a command, sent over to the server and unwrapped there, and the call is made there on the actual object. That's marshal by ref. Marshal by value, which datatables, entities etc. use, is simply wrapping the STATE of an object in a block of data, sending it from the server to the client (or from client to server) and RECREATING a new instance with the information in the block of data, to mimic as if you're using the same object, which isn't the case.

There's a lot of bullshit floating around about SOA. A lot of the time when I read an architecture magazine/article about SOA I really wonder if the author even believes his/her own drivel. They've created their own self-fulfilling prophecy: they write articles, write books, speak at conferences, but actually no-one is helped. Is writing software 100% simpler than it was 10 years ago because of SOA? No. They create problems like "We have the order service and the customer service, but what if you want all customers based on a filter on orders?"... that's not a problem, it's only a problem in the context of these services, so you shouldn't create services like that: only high-level services, which avoid these problems.

Now, that said, what we (Solutions Design) want is to provide the functionality you NEED. We see little point in providing you with things you might think you want but don't need while not providing you with things you actually need. (That was a general remark, not aimed at webservices per se :) ). So if successful WCF usage (that is: how it is supposed to be used as stated by its designers Box and Vasters) requires extensively more XML support, we'll add that, no questions asked.

What I've seen so far is that they propagate services on a high level which form a well defined piece of functionality on a high level which thus require transport of DATA back and forth, not objects.

Adding Xml serialization support isn't easy. Well: serialization isn't the hard part, it's deserialization AND keeping the xml compact. So it might be that an EntityRelation object for example is serialized as a call to a factory to create a new instance.

Frans Bouma | Lead developer LLBLGen Pro
jaschag
User
Posts: 79
Joined: 19-Apr-2006
# Posted on: 28-Nov-2006 14:55:16   

Hi Frans,

This is a heated debate indeed!

I definitely agree with you on two very significant points.

  1. The SOA bandwagon and its evangelists are massively overselling SOA and promoting it as the solution to many more problems than it is actually well suited to. The result, as you say, is that we are in danger of overcomplicating many solutions through the belief that soa/webservices is "good" and "more layers" = "better architecture" and crippling them through use of highly inefficient communication mechanisms that are often not needed.

  2. Following from the above, if you try to migrate a typical chatty "local" data access strategy onto a webservice platform, you will suffer a serious loss of performance and the project may fail as a result - particularly if the connection between the endpoints is slow / high latency. Therefore the "service contract" needs to be designed at a suitably high level in order to overcome this.

That said, wcf seems (I am no expert but have attended the usual ms technology briefings) to have been designed to mitigate some of the performance problems for example by allowing compression of soap xml between endpoints that share the compression capability so much of the xml bloat is eliminated.

In terms of choice of technology, from what I can see, the momentum is (unfortunately) with webservices as a result of all the additional functionality you get from the ever expanding set of WS-* extensions. So, if you want security, transactional capability, reliable messaging, eventing and so on you are forced down the webservice route. Given that ms are pushing us that way, I am not sure why they seem to have undercooked xml serialisation and therefore placed an arbitrary restriction on what can (and can't) be included in a service contract. It would seem that many solutions are not going to be cross platform so passing .net classes should be fully supported - but that's an ms issue - maybe netfx 3 fixes this?

Where I think we disagree is over the difference between a high-level service contract and an open service contract. In the high-level contract, the messages (or (web) methods) are designed to try to avoid chattiness and the requirement for transactions across calls. In an open contract, the methods should avoid platform-specific types (i.e. no .net classes). So, to me, the difference between your example GetCustomer methods is not the level but the openness. One could be called from java, the other not.

Taking your example a little further, in a crm system, there may be a use case where the user may request a list of customers meeting specified criteria. E.g. they may search on name, email address, post/zip code and so on. Support for this (and other) use cases could be implemented in the service contract as

GetCustomer(name, email, postcode);

If a new requirement arises that needs to be able to search on phone number as well, the above service contract must be changed. To avoid breaking the previous contract it would now become

GetCustomer(name, email, postcode);
GetCustomer(name, email, postcode, phoneNumber);

If you extrapolate this, the service contract will soon become bloated. If, on the other hand, the criteria had been a customer predicate, the service contract would remain the same for a large number of new requirements. I realise that in this (artificial) example, not all possible future requirements will be catered for but many will.
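In code, the contrast looks roughly like this (a sketch with made-up names; WCF needs distinct operation names, hence the Name property):

[ServiceContract]
public interface ICustomerSearchService
{
    // Flat-parameter style: every new criterion forces a new operation or overload.
    [OperationContract]
    EntityCollection<CustomerEntity> GetCustomers(string name, string email, string postcode);

    [OperationContract(Name = "GetCustomersWithPhone")]
    EntityCollection<CustomerEntity> GetCustomers(string name, string email, string postcode, string phoneNumber);

    // Predicate style: the contract stays put as new criteria appear (assuming predicates were serializable).
    [OperationContract]
    EntityCollection<CustomerEntity> FindCustomers(IPredicateExpression customerFilter);
}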

Similarly, if there are two use cases that need a specific customer's data but one requires the customer and its orders while the other requires the customer and its contacts, the same argument applies except we are replacing predicate with prefetch parameters. From a pragmatic point of view I prefer

GetCustomer(id, prefetchpath)

to

GetCustomerAndContacts(id)
GetCustomerAndOrders(id)
...
GetCustomerAndContactsAndOrders(id)

Again, the "level" of the above contract is the same - equivalent data is passed back and forth.
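As a call-site sketch (made-up proxy and Northwind-style names, and assuming prefetch paths could cross the wire, which today they cannot):

PrefetchPath2 path = new PrefetchPath2((int)EntityType.CustomerEntity);
path.Add(CustomerEntity.PrefetchPathOrders);
CustomerEntity customer = customerService.GetCustomer("ALFKI", path);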

So, to wind up a long post, I would suggest that, continuing the excellently pragmatic design philosophy of LL, you should consider allowing the data-containing classes (entities and predicates, prefetch paths etc.) to be used in webservice contracts so that LL provides the highest level of productivity in solutions that require webservices (rightly or wrongly).

Jascha

Posts: 13
Joined: 21-Nov-2006
# Posted on: 28-Nov-2006 15:43:47   

From jaschag :

I prefer

GetCustomer(id, prefetchpath)

to

GetCustomerAndContacts(id)
GetCustomerAndOrders(id)
...
GetCustomerAndContactsAndOrders(id)

Again, the "level" of the above contract is the same - equivalent data is passed back and forth.

So, to wind up a long post, I would suggest that, continuing the excellently pragmatic design philosophy of LL, you should consider allowing the data-containing classes (entities and predicates, prefetch paths etc.) to be used in webservice contracts

I agree with this for the most part. I do prefer the

GetCustomer(id, prefetchPath);

interface to the more verbose "operation per possible combination". I had posted this to my company's internal wiki yesterday, as a matter of fact. I do not think that it is necessary to pass a predicate list from the client to the server. That logic should live on the server, and what is passed could be a hashtable of parameters from which the predicate can be built on the server. This would be a lightweight message format.

My interface would be something like

 FindCustomers(Hashtable customerFilters);

Should the client code need to have control over the comparison type of a parameter (Like or Equal)? I am thinking it should not, because 99% of the time it should always be the same if we are talking about reusable code.
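Roughly what I'm picturing on the server side (illustrative field names, assuming Northwind-ish generated code and the usual usings):

// using System.Collections; plus the generated entity/factory/adapter classes
public EntityCollection<CustomerEntity> FindCustomers(Hashtable customerFilters)
{
    // Translate the lightweight name/value filters into predicates here, on the server.
    RelationPredicateBucket bucket = new RelationPredicateBucket();
    if(customerFilters.ContainsKey("Name"))
    {
        bucket.PredicateExpression.Add(
            new FieldLikePredicate(CustomerFields.CompanyName, null, "%" + customerFilters["Name"] + "%"));
    }
    if(customerFilters.ContainsKey("PostalCode"))
    {
        bucket.PredicateExpression.Add(CustomerFields.PostalCode == (string)customerFilters["PostalCode"]);
    }

    EntityCollection<CustomerEntity> result = new EntityCollection<CustomerEntity>(new CustomerEntityFactory());
    using(DataAccessAdapter adapter = new DataAccessAdapter())
    {
        adapter.FetchEntityCollection(result, bucket);
    }
    return result;
}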

If we throw SOA out of the picture, and just think about application layers or tiers, then a higher level interface than that of IDataAccessAdapter is probably preferable. I was probably going down the wrong road to begin with. Your UI layer should not talk to your business layer via the same interface as your business layer talks to your data access layer. You need hooks for rules checking, business processes, etc.

Maybe there is some confusion because there is not really a good separation of what those layers are in LLBLGen Pro, or if there is I don't see it. There are some business layer components mixed with the data access components - lower-level business logic, that is, in the product. It is hard to completely separate the data access and business tiers. Does the data layer need to return a "Data Access Object" to the business layer, which then provides the business entity that developers will work with? This is hard, because you end up with circular references between the two if you break them into different libraries. This is something I have struggled with every time I have built these two separate tiers myself.

Should the client even work with the business entities? Does it work with lighter-weight objects, perhaps even generated objects, but not the business entities themselves? Do these objects have the code to consume the related services, or would you have manager classes in the client as well as the lighter-weight objects?

Does all this impose restrictions on the client developer? Yes. Is that a bad thing? Yes and no. I have worked in systems that, for reasons good and bad, placed restrictions on how the business tier had to be consumed. This is not always bad, because it gives your applications some level of uniformity. The problem comes when it is restrictive to the point where developers have to code around the tier to be productive or to accomplish an edge-case task.

Sorry if I got a little off track in this post; my thoughts are a little fragmented this morning.

Posts: 13
Joined: 21-Nov-2006
# Posted on: 28-Nov-2006 23:03:18   

To answer my own question, it looks like PrefetchPath2, or for that matter PrefetchPathElement2, aren't XmlSerializable either, so that means this interface

GetCustomer(int id, PrefetchPath2 prefetchPath);

doesn't work as is.

Starting to think maybe that's the wrong path (pardon the pun) anyway.

-Casey

jaschag
User
Posts: 79
Joined: 19-Apr-2006
# Posted on: 29-Nov-2006 00:37:49   

Casey,

Aside from the fact that predicates and prefetch paths do not happen to be serialisable, what advantages are you expecting to see by passing:

Lightweight DTOs instead of entities?
Hashtables containing a list of criteria instead of predicates?
<another means of specifying an object graph to retrieve> or an operation per combination instead of a prefetch path?

Jascha

Otis
LLBLGen Pro Team
Posts: 39612
Joined: 17-Aug-2003
# Posted on: 29-Nov-2006 10:08:24   

jaschag wrote:

Casey,

Aside from the fact that predicates and prefetch paths do not happen to be serialisable, what advantages are you expecting to see by passing:

They ARE serializable, over remoting. They aren't serializable to XML, as that doesn't make sense. XML is about data, not about objects.

That's a BIG difference. One could argue that using webservices by passing object state serialized into XML is actually a dirty hack.

Lightweight DTOs instead of entities?
Hashtables containing a list of criteria instead of predicates?
<another means of specifying an object graph to retrieve> or an operation per combination instead of a prefetch path?

Jascha

Entities are passed by value to / from webservices, same as datatables.

It's not a heated debate, there's just a lot of confusion created by the pundits, the professional bookwriters/speakers who created their own system to make themselves money. Check out what they tried to sell you 3, 4 years ago and what they're trying to sell you now. It's different, though why did they change their mind? Apparently they didn't know anything about what they were talking about 3-4 years ago. ;)

So, to wind up a long post, I would suggest that, continuing the excellently pragmatic design philosophy of LL, you should consider allowing the data-containing classes (entities and predicates, prefetch paths etc.) to be used in webservice contracts so that LL provides the highest level of productivity in solutions that require webservices (rightly or wrongly).

Entities are serializable to XML - their data is. Predicates and prefetch paths don't actually contain data; they're command parameters.

As I said before: if it is really required to serialize these to XML, I'll add support for it, though keep in mind that because various objects aren't serializable to XML at the moment, you're forced to design your service interface differently, and that is IMHO not a bad thing. If the objects are XML serializable, people will tend to fall for the 'Old'-SOA pitfall.

But above all: keep in mind that xml webservices are great for having a full piece of functionality implemented in full so you can just call 1 or 2 routines and you'll get the full result you want (GetWeather(...) etc.) but they're NOT MEANT for serving as a tier in an application. Because in that situation, you're doing RPC with a high number of calls downwards, which is inefficient using XML all over the place.

Realize that if you don't use pure message based method calling now between class a and b, you shouldn't use a webservice call between a and b. Because that's the whole reason people want objects getting serialized to XML in the first place: they want to call a webservice as if it's an object locally in their app domain using normal method calls and it LOOKS like that's going on, but it's not.

Frans Bouma | Lead developer LLBLGen Pro
jaschag
User
Posts: 79
Joined: 19-Apr-2006
# Posted on: 29-Nov-2006 12:44:11   

Frans,

XML is about data, not about objects.

That's a BIG difference. One could argue that using webservices by passing object state serialized into XML is actually a dirty hack.

I get the feeling that we are not going to see eye to eye on this! To me, xml is just an encoding - it is semantically no different to binary. And webservices are just a standard protocol that happen to talk xml. I do not see why the protocol and its encoding should dictate how it is used. To me, the acid test of whether it is being used well is whether the system that uses it satisfies the requirements it was designed to meet rather than whether the communication between layers conforms to a common (messaging) pattern.

Predicates and prefetch paths don't actually contain data; they're command parameters.

I'm talking about data in the more general sense - not just domain data - but any data including that which defines a message instance.

I get the feeling we have been here before - so maybe we should agree to disagree and move on...

It's not a heated debate, there's just a lot of confusion created by the pundits, the professional bookwriters/speakers who created their own system to make themselves money. Check out what they tried to sell you 3, 4 years ago and what they're trying to sell you now. It's different, though why did they change their mind? Apparently they didn't know anything about what they were talking about 3-4 years ago. ;)

Hey - we agree!

As I said before: if it is really required to serialize these to XML, I'll add support for it, though keep in mind that because various objects aren't serializable to XML at the moment, you're forced to design your service interface differently, and that is IMHO not a bad thing. If the objects are XML serializable, people will tend to fall for the 'Old'-SOA pitfall.

I presume you can guess which way I would vote on this. FWIW I would guess it's fair to assume that most of the devs that use LL are reasonably competent professionals. As such I think it is fair for you to expect them to be able to handle the responsibility for using webservices competently as you do for other aspects of the LL framework. To me that is no different to expecting them to use adapters sensibly - AFAIK you do not prevent them from fetching 10 million entities! Let's face it, there are countless ways that a dev can cripple application performance - so give them the power and expect them to use it responsibly.

Jascha

Otis
LLBLGen Pro Team
Posts: 39612
Joined: 17-Aug-2003
# Posted on: 29-Nov-2006 13:54:50   

jaschag wrote:

Frans,

XML is about data, not about objects. That's a BIG difference. One could argue that using webservices by passing object state serialized into XML is actually a dirty hack.

I get the feeling that we are not going to see eye to eye on this! To me, xml is just an encoding - it is semantically no different to binary. And webservices are just a standard protocol that happen to talk xml.

No, that's not the case, hence the difference between remoting and webservices: in webservices everything is passed by value, in remoting you can have proxies and marshal by ref.

Perhaps I didn't express myself correctly, but what I meant was that there's a fundamental difference with webservices: what gets across the wire is data, not objects. With remoting, you can have marshal-by-ref objects, which are local to your code; webservices are always remote.

I do not see why the protocol and its encoding should dictate how it is used. To me, the acid test of whether it is being used well is whether the system that uses it satisfies the requirements it was designed to meet rather than whether the communication between layers conforms to a common (messaging) pattern.

That's only possible if the overhead of the transport is minimal. With webservices this isn't the case. So you have to take into account what the consequences are. Passing object graphs across the wire is not the way to go.

The point is that an object is an INSTANCE which is relative to the process which created the object. A webservice by definition can't have any notion of that object as it's a well-defined piece of functionality which stands basically on its own.

Passing an object over the wire doesn't make any sense, as that's not what's happening. What's happening is that on the other end, a new instance is created and the same data is put into that new instance. Now, people can try to see that as the same instance but that's not the case.

Entities are passable over an xmlwebservice wire because you can see them as mirrored copies of data in the db, so they're passed by value. Though what's passed is strictly the data and some control information to be able to deserialize the data into a correct object.

With remoting this actually also happens, but you can have objects passed by reference as well, which is a fundamental difference with webservices.

There have been numerous discussions about why IXmlSerializable and the XmlSerializer are so retarded compared to, for example, the Soap serializer, which also produces Xml and CAN deal with cyclic references, hashtables, interface-based types and the like. MS' response basically comes down to this: webservices are about data and the xml is thus there to contain the data; it's not about objects. You shouldn't think in objects, but in data. As data can't have cyclic references (objects do, the data doesn't), the xmlserializer can't deal with it, also because the data has to be in a given fixed format.

This is also why the XmlSerializer has restrictions in the process of serializing 'objects' as it's not serializing objects, it's PROJECTING object data onto an XML schema.

A fundamental difference. This is also why writing code to support webservices is so hard: you have to convert your object in memory to data so it has any meaning to the other side. However: NOT in an RPC kind of way where the client is on a given platform and thus the client can assume types known to the server.

As data is exchanged instead of OBJECTS, object-related information frankly has no meaning on the server when sent from the client - how else would you be able to serve a java client with a .net webservice?

Because Microsoft has made a TERRIBLE mistake with vs.net and webservice support, a lot of developers think that a webservice is just like any other class or library with functions, but that's not the case.

Predicates and prefetch paths don't actually contain data; they're command parameters.

I'm talking about data in the more general sense - not just domain data - but any data including that which defines a message instance.

I get the feeling we have been here before - so maybe we should agree to disagree and move on...

It's not about instances... instances are process related, a service has nothing to do with an object instance on the client and vice versa.

What does it mean when you return an entity object from a webmethod? The ONLY thing it means is that you return the entity data (the actual entity instance as known in the db) from the webmethod. That's it. You're not returning an object, that's my whole point.

As I said before: if it is really required to serialize these to XML, I'll add support for it, though keep in mind that because various objects aren't serializable to XML at the moment, you're forced to design your service interface differently, and that is IMHO not a bad thing. If the objects are XML serializable, people will tend to fall for the 'Old'-SOA pitfall.

I presume you can guess which way I would vote on this. FWIW I would guess it's fair to assume that most of the devs that use LL are reasonably competent professionals. As such I think it is fair for you to expect them to be able to handle the responsibility for using webservices competently as you do for other aspects of the LL framework.

History has shown that this isn't the case, unfortunately. It's not as often as 2-3 years ago, but we still occasionally get people who want to return thousands of entities from a webmethod and wonder why it's so slow and why the xml is so big.

There's a reason why the dataaccessadapter object isn't remotable for example. I won't change that in the future, as it's fundamentally wrong to do so and leads to chatty bad applications.

Of course I can build support for that, and then people mail us a few days before their deadline that their big server has burned down because of the high network traffic. That's not a cooked-up situation, these things happen. In a way I don't even blame the developers: they're working at a highly abstracted level, and you have to realize that a distributed application is NOT the same as a single application and has very different characteristics and limitations. However, because of the high level of abstraction when working with services, this gets lost in the details rather quickly.

To me that is no different to expecting them to use adapters sensibly - AFAIK you do not prevent them from fetching 10 million entities! Let's face it, there are countless ways that a dev can cripple application performance - so give them the power and expect them to use it responsibly. Jascha

I don't see why I should simply implement it because people might need it even if it leads to bad applications. If the only situation in which a feature F is usable is actually an anti-pattern or a situation you should avoid, I strongly feel feature F shouldn't be in the code.

That's also why I think time invested in projections back from DTOs onto entities is far better spent. That way you pass data efficiently back and forth to a webservice and can project the results back onto whatever objects you have in your application.
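As a rough sketch of that (an illustrative hand-written DTO, not generated code):

using System.Runtime.Serialization;

[DataContract]
public class CustomerDto
{
    [DataMember] public string CustomerId;
    [DataMember] public string CompanyName;
}

public static class CustomerProjections
{
    // Entity -> DTO before the data goes over the wire...
    public static CustomerDto ToDto(CustomerEntity entity)
    {
        CustomerDto dto = new CustomerDto();
        dto.CustomerId = entity.CustomerId;
        dto.CompanyName = entity.CompanyName;
        return dto;
    }

    // ...and DTO -> entity again on the other side.
    public static void ProjectOntoEntity(CustomerDto dto, CustomerEntity entity)
    {
        entity.CustomerId = dto.CustomerId;
        entity.CompanyName = dto.CompanyName;
    }
}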

I also fail to see why a predicate should be passed from client to webservice for example. If you think it's required, please answer me this: what does a Java client have to pass to use the same webservice?

Also: how on earth can the CLIENT decide how data is fetched ? That information is unknown to the client.

But fundamentally: it's about at which level a webservice is placed in an application: if you use a webservice as a tier in your application, IMHO that's fundamentally wrong. If you use it as a well-defined piece of functionality which can stand on its own, that's the way to go, however these services thus live at a high level in your application otherwise you'll run into the well known customer - order service dilemma.

Could you for example describe to me why you prefer to use a webservice on a low level in your application as a remoted service which can't stand on its own?

Frans Bouma | Lead developer LLBLGen Pro
Posts: 13
Joined: 21-Nov-2006
# Posted on: 29-Nov-2006 16:16:47   

I almost feel guilty for starting the thread that resulted in a near holy war on web services, especially since my original post had nothing at all to do with webservices, except as a possible end point for my WCF client. Notice I used the word possible.

I believe my original code spike and the assumptions I made about what I might want to do were completely wrong. I agree with Frans on this. Nothing outside the generated entities, collections, helpers, etc. needs to be passed across the wire to the client. I think this is true whether you are talking about remoting or webservices.

-Casey

Otis
LLBLGen Pro Team
Posts: 39612
Joined: 17-Aug-2003
# Posted on: 29-Nov-2006 16:40:34   

Don't feel guilty, and it's not a holy war :) It's such a fuzzy, vague topic without any provable "this is the best way to do it" solution to any problem, so a lot of different points of view exist.

Frans Bouma | Lead developer LLBLGen Pro
jaschag
User
Posts: 79
Joined: 19-Apr-2006
# Posted on: 29-Nov-2006 17:23:17   

Frans,

Just when I thought we had put this to bed I now have a bigger essay to write...

No, that's not the case, hence the difference in remoting and webservices: in webservices everything is passed by value, in remoting you can have proxies and marshall by ref.

Perhaps I didn't express myself correctly, but what I meant was that there's a fundamental difference with webservices: what gets across the wire is data, not objects. With remoting, you can have marshall by ref objects, which are local to your code, webservices are always remote.

I do realise that there are fundamental differences in the technologies and I take your point that ms are perhaps misguided in their efforts to make this transparent to the developer - btw - to further your despair - wcf seems to be doing a very good job of that.

If we consider what I would imagine is a typical pattern of usage, I see the two technologies as being used in quite similar fashions: a webservice implementing IServiceContract vs. an MBR remote service object implementing IServiceContract. If the service contract includes entities (mbv), then in both cases the state of the entities will be serialised at the server and deserialised at the client. Granted, the serialisation technologies are different (otherwise we would probably not be discussing this) but the net effect is the same. Clearly, in neither case is the actual entity instance transferred, but a semantic equivalent (at the domain level) is (which leads to awkward side-effects such as a db-populated identity value not being implicitly transferred back to the client even in the case of remoting).

That's only possible if the overhead of the transport is minimal. With webservices this isn't the case. So you have to take into account what the consequences are. Passing object graphs across the wire is not the way to go.

If a client use case provides a data entry form for an order including its order lines surely that data must be passed somehow. How would you pass the data equivalent of the object graph across the wire more efficiently?

There have been numerous discussions about why IXmlSerializable and the XmlSerializer are so retarded compared to, for example, the Soap serializer, which also produces Xml and CAN deal with cyclic references, hashtables, interface-based types and the like. MS' response basically comes down to this: webservices are about data and the xml is thus there to contain the data; it's not about objects. You shouldn't think in objects, but in data. As data can't have cyclic references (objects do, the data doesn't), the xmlserializer can't deal with it, also because the data has to be in a given fixed format.

Let's not let ms inconsistencies cloud the argument - the fact that the xmlserializer cannot handle cyclic references does not mean that data, per se, cannot have cyclic references - that's just a matter of how it is stored.

This is also why the XmlSerializer has restrictions in the process of serializing 'objects' as it's not serializing objects, it's PROJECTING object data onto an XML schema. A fundamental difference. This is also why writing code to support webservices is so hard: you have to convert your object in memory to data so it has any meaning to the other side. However: NOT in an RPC kind of way where the client is on a given platform and thus the client can assume types known to the server.

As data is exchanged instead of OBJECTS, object-related information frankly has no meaning on the server when sent from the client - how else would you be able to serve a java client with a .net webservice?

Guess what - I agree! I would point out that I have mentioned a few times in my posts that I am referring to systems where interoperability is not a requirement.

Because Microsoft has made a TERRIBLE mistake with vs.net and webservice support, a lot of developers think that a webservice is just like any other class or library with functions, but that's not the case.

So how do you really feel about ms webservices? ;) Careful with that partner status!

What does it mean when you return an entity object from a webmethod? The ONLY thing it means is that you return the entity data (the actual entity instance as known in the db) from the webmethod. That's it. You're not returning an object, that's my whole point.

Agreed - on a technical level the instances are different - but the domain entity that the instance represents is the same and that is what I am primarily interested in. If I clone an entity instance I have a new instance but that does not affect how I might use it from a domain or business perspective - I can modify it, call methods on it and save it as necessary to fulfil a workflow or business process - for the most part it really doesn't matter how that instance came to be.

History has shown that this isn't the case, unfortunately. It's not as often as 2-3 years ago, but we still occasionally get people who want to return thousands of entities from a webmethod and wonder why it's so slow and why the xml is so big.

There's a reason why the dataaccessadapter object isn't remotable for example. I won't change that in the future, as it's fundamentally wrong to do so and leads to chatty bad applications.

Of course I can build support for that, and then people mail us a few days before their deadline that their big server has burned down because of the high network traffic. That's not a cooked-up situation, these things happen. In a way I don't even blame the developers: they're working at a highly abstracted level, and you have to realize that a distributed application is NOT the same as a single application and has very different characteristics and limitations. However, because of the high level of abstraction when working with services, this gets lost in the details rather quickly.

I wish you luck on your crusade to protect developers from themselves - that's a big ask.

I don't see why I should simply implement it because people might need it even if it leads to bad applications. If the only situation in which a feature F is usable is actually an anti-pattern or a situation you should avoid, I strongly feel feature F shouldn't be in the code.

This is the heart of our differences - the fact that some functionality may be abused does not mean it will be. It is a bold claim that the "only situation" in which such functionality could be used is an anti-pattern - sure, it can be used poorly, but there are valid situations too, i.e. those that work well. Development languages, environments and frameworks are littered with potential for abuse. For example, in the DataAccessAdapter you have a KeepConnectionOpen property - that may lead to a "classic" anti-pattern and, if used inappropriately, may lead to catastrophic problems. But, as you state in the docs, "This can give extra performance, especially in code where multiple database fetches are used in one routine.". I.e. careful with this, but when used correctly it is very useful.

That's also why I think time invested in projections back from DTOs onto entities is far better spent. That way you pass data efficiently back and forth to a webservice and can project the results back onto whatever objects you have in your application.

I'm not sure what the difference in efficiency is between passing an entity or its equivalent dto since they are likely to contain the same data. The advantage you do have is the decoupling of the service contract from the entity so you can change an entity internally without changing the service contract - but that argument applies to any layer boundary not just (web)service.

I also fail to see why a predicate should be passed from client to webservice for example. If you think it's required, please answer me this: what does a Java client have to pass to use the same webservice? Also: how on earth can the CLIENT decide how data is fetched ? That information is unknown to the client.

As above - I am referring to non-interoperable services. The client is deciding what should be fetched not how.

But fundamentally: it's about at which level a webservice is placed in an application: if you use a webservice as a tier in your application, IMHO that's fundamentally wrong. If you use it as a well-defined piece of functionality which can stand on its own, that's the way to go

Why can a tier not be a "well-defined piece of functionality which can stand on its own" - are the two mutually exclusive?

otherwise you'll run into the well known customer - order service dilemma.

Not that I need another dilemma, but could you explain that one - it is not well known to me (or Google ;) )

Could you for example describe to me why you prefer to use a webservice on a low level in your application as a remoted service which can't stand on its own?

By low level, I assume you mean fine-grained contracts that encourage chatty conversations. As stated in previous posts, I do not intend to use webservices at a low level - just because I may want to use LL classes in service contracts does not imply that they are low level - the contract can perform high level functions using domain entities - such entities tend to be the "language" of the business / domain and are therefore integral to most associated processes.

Why webservices? Because, thanks to ms & others, webservices are packed with useful functionality and consequently are more and more attractive as they become the de-facto protocol for distributed applications whether interoperable or not.

Casey: I believe my original code spike and the assumptions I made about what I might want to do were completely wrong. I agree with Frans on this. Nothing outside the generated entities, collections, helpers, etc. needs to be passed across the wire to the client. I think this is true whether you are talking about remoting or webservices.

And then there was one... it's a lonely place in my corner :( FWIW Casey, I think Frans is suggesting that not even entities should be transferred...

Jascha

Otis
LLBLGen Pro Team
Posts: 39612
Joined: 17-Aug-2003
# Posted on: 29-Nov-2006 18:50:14   

jaschag wrote:

Frans, Just when I thought we had put this to bed I now have a bigger essay to write...

I didn't have the idea this thread was done...

No, that's not the case, hence the difference between remoting and webservices: in webservices everything is passed by value, in remoting you can have proxies and marshal by ref. Perhaps I didn't express myself correctly, but what I meant was that there's a fundamental difference with webservices: what gets across the wire is data, not objects. With remoting, you can have marshal-by-ref objects, which are local to your code; webservices are always remote.

I do realise that there are fundamental differences in the technologies and I take your point that ms are perhaps misguided in their efforts to make this transparent to the developer - btw - to further your despair - wcf seems to be doing a very good job of that.

Take a random page from the WCF docs and it will likely talk about contract-first services, where you define (recommended) an interface for a service and then implement that service. They won't be talking about passing object graphs back and forth. That is the point I was making.

Trust me: I've made the same mistake in the past, where I thought that the XmlSerializer should simply be able to serialize an object graph to XML and back. This was a mistake, based on the assumption that webservices are just about having a remote service to which or from which you transport object graphs as if you're calling a lower tier's routine.

That's not the purpose of webservices and therefore not the purpose of the XmlSerializer or other XML webservice-supporting framework code. Now, you might think I sound like someone who just thinks he knows it all and tries to sell you a story about SOA. I don't even like SOA, so that's not it. I just try to explain why webservices are about messaging and high-level services, not about low-level services which consume/produce object graphs.

I also don't care what MS sold us some time ago; it simply doesn't work. Sooner or later you'll pay the price of using low-level webservices with object graphs, simply because the overhead is too high, the performance isn't up to par, etc.

That's only possible if the overhead of the transport is minimal. With webservices this isn't the case. So you have to take into account what the consequences are. Passing object graphs across the wire is not the way to go.

If a client use case provides a data entry form for an order, including its order lines, surely that data must be passed somehow. How would you pass the data equivalent of the object graph across the wire more efficiently?

In the message sent to the service to process the data, and the container that data is stored in is typically a Data Transfer Object (DTO). You can use an entity as a DTO if you want, but you can also project the data in the entity onto simpler objects which are easier to map onto an XML schema, because that's actually what's happening.
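For example, a hand-written DTO plus projection could look something like this (just a sketch; the OrderEntity property names are assumptions, not necessarily what your generated code exposes):

using System;
using System.Runtime.Serialization;

[DataContract]
public class OrderDto
{
    [DataMember] public int OrderId;
    [DataMember] public DateTime OrderDate;
    [DataMember] public decimal TotalDue;
}

// inside a service method, copy the entity data into the message shape:
// OrderDto dto = new OrderDto();
// dto.OrderId   = orderEntity.OrderId;     // assumed property names on the fetched entity
// dto.OrderDate = orderEntity.OrderDate;
// dto.TotalDue  = orderEntity.TotalDue;
// return dto;

The DTO is what the contract (and the generated XML schema) describes; how the data got into it stays a server-side detail.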

There have been numerous discussions about why IXmlSerializable and the XmlSerializer are so limited compared to, for example, the SOAP serializer, which also produces XML and CAN deal with cyclic references, hashtables, interface-based types and the like. MS's response basically comes down to this: webservices are about data, and the XML is there to contain the data; it's not about objects. You shouldn't think in objects, but in data. As data can't have cyclic references (objects do, the data doesn't), the XmlSerializer can't deal with them, also because the data has to be in a given fixed format.

Let's not let ms inconsistencies cloud the argument - the fact that the XmlSerializer cannot handle cyclic references does not mean that data, per se, cannot have cyclic references - that's just a matter of how it is stored.

Data can't have references. How would you store a cyclic reference in XML between one EmployeeEntity object and another EmployeeEntity object?

Or better: one customer object and 10 order objects?

You can only insert marker elements which suggest a cycle in the references, and those only mean something to interpreting code. There's nothing in the XML Schema syntax which allows you to define a cyclic reference, because it doesn't make sense.

The LLBLGen Pro code which produces XML inserts marker elements (using GUIDs) which are then replaced, in a second pass during deserialization, by the entity they refer to in the object which receives the XML data. But it relies heavily on the interpretation of the XML: it interprets data in the XML as object references, which is odd if you consider that a webservice should address any client.
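To make that concrete with made-up types (this shows the general idea only, not our actual XML format):

// Object graph: Customer and Order point at each other, so there's a cycle.
public class Customer
{
    public string CustomerId;
    public System.Collections.Generic.List<Order> Orders;   // references orders...
}

public class Order
{
    public int OrderId;
    public Customer Customer;   // ...which reference the customer back
}

// Data/message shape: the cycle is gone, because the "reference" is just a key value.
// Only interpreting code can turn that key back into an object reference; the XML itself carries no reference.
public class OrderMessage
{
    public int OrderId;
    public string CustomerId;
}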

Because Microsoft has made a TERRIBLE mistake with vs.net and webservice support, a lot of developers think that a webservice is just like any other class or library with functions, but that's not the case.

So how do you really feel about ms webservices ;) ??? Careful with that partner status!

I think with WCF they take a step in the right direction, but they still propagate the technique too vaguely and promote it for projects which don't benefit from it at all, because developers apply the technique completely wrong (without knowing it, and that's the sad part).

What's also left untouched is the core question: "WHY?". I like to ask that question a lot, and often I don't see why webservices would be the BEST option for the problem at hand. You see: webservices should be the answer to the question of which technique is the BEST one to use in the system you're building. And with best I mean: with the most pros and the fewest cons.

I don't see why I should simply implement it because people might need it, even if it leads to bad applications. If the only situation in which a feature F is usable is actually an anti-pattern or a situation you should avoid, I strongly feel feature F shouldn't be in the code.

This is the heart of our differences - the fact that some functionality may be abused does not mean it will be. It is a bold claim that the "only situation" in which such functionality could be used is an anti-pattern - sure, it can be used poorly, but there are valid situations too, i.e. those that work well. Development languages, environments and frameworks are littered with potential for abuse. For example, the DataAccessAdapter has a KeepConnectionOpen property - that may lead to a "classic" anti-pattern and, if used inappropriately, may lead to catastrophic problems. But, as you state in the docs, "This can give extra performance, especially in code where multiple database fetches are used in one routine." I.e. be careful with this, but when used correctly it is very useful.

Please re-read what I said. I said: IF the only situation in which a feature F is usable is actually an anti-pattern or a situation you should avoid... So only in that case, F shouldn't be in the code and I won't change that motto.

Your examples are about different cases, where a feature has a usable scenario which isn't an anti-pattern or something you should avoid. KeepConnectionOpen is very useful in routines where multiple actions have to be done in sequence, so it's more efficient to keep the connection open (if you don't start a transaction). I was talking about features which don't have a scenario in which you can say: that's something useful and not something you should avoid.

Making objects convertible to XML 'just because' promotes using them in XML webservice scenarios which you should avoid, as it moves AWAY from the whole vision Box and Vasters have laid out with WCF.

That's also why I think time invested in projections from DTOs back onto entities is far better spent. That way you pass data efficiently back and forth to a webservice and can project the results back onto whatever objects you have in your application.

I'm not sure what the difference in efficiency is between passing an entity or its equivalent DTO, since they are likely to contain the same data. The advantage you do have is the decoupling of the service contract from the entity, so you can change an entity internally without changing the service contract - but that argument applies to any layer boundary, not just a (web)service.

Projecting entity data onto a class which is the message is more efficient, as the XmlSerializer can then generate C# code to produce the XML and you don't rely on the XML-producing code in an entity, which uses reflection.

But fundamentally: it's about at which level a webservice is placed in an application: if you use a webservice as a tier in your application, IMHO that's fundamentally wrong. If you use it as a well-defined piece of functionality which can stand on its own, that's the way to go.

Why can a tier not be a "well-defined piece of functionality which can stand on its own" - are the two mutually exclusive?

Because a tier is used as part of a piece of functionality. Simple example: a routine calls 3 methods of a service in sequence: method A is called, data is returned, the data is used in some logic, that data is then used as input to method B, and the process repeats itself with method C.

Not only are you calling the service 3 times, you are also passing data back and forth 3 times, all of it intermediate results. Add to that that there is a fixed order in which the methods apparently have to be called: A, B and then C.

Better would have been to make 1 method, D, on the service, which performs A, B and C plus the processing of the intermediate results and simply returns the result C would have returned.
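In contract terms the difference would look something like this (hypothetical contracts, purely to illustrate):

using System.ServiceModel;

// Chatty: the client must know the order A -> B -> C and ships intermediate results back and forth.
[ServiceContract]
public interface IChattyService
{
    [OperationContract] int MethodA(int input);
    [OperationContract] int MethodB(int resultOfA);
    [OperationContract] int MethodC(int resultOfB);
}

// Coarse-grained: one operation performs the whole use case on the server.
[ServiceContract]
public interface ICoarseGrainedService
{
    [OperationContract] int MethodD(int input);   // runs A, B, C and the intermediate logic internally
}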

Compare it with a method in a form which calls 5 methods on an object. You could then also argue that you've moved logic which belongs inside that object into the form. Many consider it a better design, in a lot of cases, to place the calls to the 5 methods inside the object, to make the control flow the concern of the object, not the form.

I deliberately said "in a lot of cases" and not "always", as there are cases, and not that rarely either, where you want the logic to be placed outside the class you're calling the methods on (e.g. static BL methods). Other examples are multi-step webservices where you first have to log in, call one or two methods and then complete the transaction.

The advantage is that the caller is freed from the overhead and micromanagement of what he actually wanted to do: get the result of C returned. Because what happens if B is split up or an extra step is needed? With the method on the server, you can handle this. This only works with high-level services, where the functionality provided by the service stands on its own: a vertical slice of your application from top to bottom, not a horizontal slice. We'll get to that in a second.

otherwise you'll run into the well-known customer - order service dilemma.

Not that I need another dilemma, but could you explain that one - it is not well known to me (or Google ;) )

The customer - order dilemma is what brought the old SOA thinkers back to the drawing board.

Take for example an ordering system, with orders, products, customers etc., and you're asked to design the webservices for this system. A few years ago, people would say: oh, you need an Order service which handles the order-related stuff, a Customer service which handles the customer-related stuff, etc.

However, then comes the problem: show me the top 10 customers with their total order count over the last 6 months.

Is this a question for the order service? No, it's about customers. Is this then a question for the customer service? Perhaps, but then it has to consult the order service, and in a very inefficient way: it has to merge/join/aggregate data itself, as it can't use a single query.

Today you won't see any SOA speaker mentioning these kinds of services. And it's a big problem, because if you're not careful and place your service at a low level in your application, you either end up with one big fat service (unmaintainable) or you end up with a couple of services which are used inefficiently, UNLESS you cheat and place methods on services which actually don't belong there (as in my example above).

With services which provide a well-defined piece of functionality you don't have this problem, as the service can do everything it needs to complete that functionality itself.
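For the "top 10 customers" question, such a functionality-oriented service might expose something like this (a hypothetical contract; the names and shapes are made up):

using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract]
public interface ISalesReportingService
{
    // Answered server-side with a single aggregate query; the client never joins customer and order data itself.
    [OperationContract]
    CustomerOrderSummary[] GetTopCustomersByOrderCount(int top, DateTime since);
}

[DataContract]
public class CustomerOrderSummary
{
    [DataMember] public string CustomerId;
    [DataMember] public string CompanyName;
    [DataMember] public int OrderCount;
}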

Perhaps 'well-defined piece of functionality' is too vague as well, I'm not sure. I'll mention as an example: WeatherService.GetWeather("The Hague, NLD");

A bit boring, but it illustrates what I meant: a service provides a feature which can be completely performed by the service, namely providing weather details for a given location. The client doesn't have to call 2 or 3 methods to obtain all weather details (like wind, rain, temperature); it's all in one service method.

Could you for example describe to me why you prefer to use a webservice on a low level in your application as a remoted service which can't stand on its own?

By low level, I assume you mean fine-grained contracts that encourage chatty conversations. As stated in previous posts, I do not intend to use webservices at a low level - just because I may want to use LL classes in service contracts does not imply that they are low level - the contract can perform high level functions using domain entities - such entities tend to be the "language" of the business / domain and are therefore integral to most associated processes.

Ok, but using predicates etc. ties your webservice to your client and places logic which should be on the service in the client. This means that the client knows how to fetch a customer, for example (by providing the predicate for the customer), instead of passing the customer id to the service and letting the service find out how to obtain the customer.
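The contrast, sketched as hypothetical contracts (CustomerDto is made up; RelationPredicateBucket is assumed to come from the ORM support classes assembly):

using System.Runtime.Serialization;
using System.ServiceModel;
using SD.LLBLGen.Pro.ORMSupportClasses;   // for RelationPredicateBucket; requires the runtime assembly

// The client builds the predicate, so the client knows how fetching works
// (and, as discussed above, the data contract serializer can't handle this type anyway).
[ServiceContract]
public interface ICustomerServiceLeaky
{
    [OperationContract]
    CustomerDto GetCustomer(RelationPredicateBucket filter);
}

// The client only says which customer it wants; the "how" stays on the service.
[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    CustomerDto GetCustomer(string customerId);
}

[DataContract]
public class CustomerDto
{
    [DataMember] public string CustomerId;
    [DataMember] public string CompanyName;
}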

Why webservices? Because, thanks to ms & others, webservices are packed with useful functionality and consequently are more and more attractive as they become the de-facto protocol for distributed applications whether interoperable or not.

You'd think? I think it won't take off, not at the scale MS wants it to. The reason is simple: webservices aren't for every application AND by using them you have to think carefully about your architecture: where do I pull/push large blocks of data, etc.

Don't get me wrong: a webservice which provides translation services for texts, for example, is great. A webservice which is solely there because the application apparently needed to be distributed and run on 2 systems is probably not the best solution, due to the overhead.

Webservices are also about interoperability: functionality is provided to clients who consume that functionality, regardless of their platform, and this is simply because only data is used, not object graphs / object instance information.

Of course, it's your call, it's your project. But choosing webservices has to be a solution to a given problem, and all other options must have come out as lesser options. For example: the generated code and runtime fully support remoting. Why not opt for remoting if you don't care whether you tie your client code to the service by sending the predicates etc. over the wire?

Casey: I believe my original code spike and the assumptions I made about what I might want to do were completely wrong. I agree with Frans on this. Nothing outside the generated entities, collections, helpers, etc. needs to be passed across the wire to the client. I think this is true no matter if you are talking about remoting or webservices.

And then there was one... it's a lonely place in my corner :( FWIW Casey, I think Frans is suggesting that not even entities should be transferred...

If I suggested that, I would have written that. Where do I suggest entities shouldn't be used as DTOs, for example? It's not the most efficient way, but it can be an option.

Before you say "but why are predicates ISerializable then?": they're serializable for remoting, and because serialization is also used for binary storage of objects, e.g. in databases, in temp files or elsewhere. As it's faster than generating XML, it's often a better choice, so that's a reason why they're serializable: there's a useful way to use the serialized data (and also because 3 years ago distributed applications were better off with remoting, and not a lot was thought out about SOA yet). Still, remoting isn't magic either if you're not careful.

I haven't found a reason to serialize predicates etc. to XML that would justify adding the feature, also because webservices today, with WCF etc., are about different things, so predicates have no purpose there.

Would it be better if predicates weren't serializable for remoting either? In a sense... yes.

Frans Bouma | Lead developer LLBLGen Pro