The round trips you save are small, as the data for the update has to be sent in either case (proc or dynamic query).
Our queries are actually more efficient, because each query updates just the fields that changed, so our update statements are tailored to exactly what has to be updated. A proc which updates a table with, say, 10 fields always has to accept all 10 parameters (which might be nullable), and it then needs inefficient ISNULL checks so that only the fields whose parameters aren't null get updated.
So if you have 20 entities in a collection and 3 of them have 2 fields changed, you get 3 update statements in a transaction, each updating just those 2 fields, and only that data is sent. Your procs can't be that specific: otherwise you'd have to write a separate proc for every possible set of changed fields, which is unworkable, so people tend to fall back to less optimal procs with nullable parameters and ISNULL / COALESCE checks.
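To make that concrete, here's a minimal sketch of how a dynamic-query engine could generate such a tailored statement. All names here (build_update, the Customer table, the fields) are illustrative, not taken from any real O/R mapper:

```python
# Hypothetical sketch: build an UPDATE statement covering only the
# fields that actually changed on one entity, as a dynamic-query
# engine would. Values are passed as named parameters, not inlined.

def build_update(table, key_field, key_value, changed_fields):
    """Return (sql, params) for an UPDATE touching only changed_fields."""
    set_clause = ", ".join(f"{name} = @{name}" for name in changed_fields)
    sql = f"UPDATE {table} SET {set_clause} WHERE {key_field} = @{key_field}"
    params = dict(changed_fields)
    params[key_field] = key_value
    return sql, params

# Only 2 of the table's (say) 10 fields changed, so only those 2
# appear in the statement and only their values go over the wire.
sql, params = build_update(
    "Customer", "CustomerID", 42,
    {"Phone": "555-0100", "City": "Amsterdam"},
)
print(sql)
# A generic proc, by contrast, must accept all 10 parameters and guard
# each column with an ISNULL(@Phone, Phone)-style check, so every
# column is written and every parameter is shipped, every time.
```

Run for the three changed entities in the example above, this yields three such statements inside one transaction, each carrying only the two changed values plus the key.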
Updates aren't batched, but that's also not that significant. You win far more with queries crafted specifically for the update at hand, e.g. fields a and b changed and only those should be updated in the table.
Apart from that, writing a proc for this doesn't really scale: if something changes, you have to update the proc, likely pass more or fewer parameters, and so on, while with entities you don't have to do any of that.
But let the sceptical programmers write some test code and measure it: update 3 tables of 10 fields each, changing 2 fields in each table. They can't possibly write a proc for those 2 fields specifically, since on another occasion 3 or more fields might have changed and a different proc would be needed. You end up with stuff like:
http://weblogs.asp.net/fbouma/pages/7049.aspx