I am in the process of refactoring a lot of code. I have thick and thin clients using facade methods that return generic datasets. To lighten the load on the UI developers, I thought I could write a simple helper class like this:
Public Class CollectionHelpers

    Public Shared Function ToDataset(ByVal useCollection As EntityCollectionBase2) As DataSet
        Return CollectionHelpers.ToDataset(useCollection, "dataset", "table")
    End Function

    Public Shared Function ToDataset(ByVal useCollection As EntityCollectionBase2, ByVal datasetName As String) As DataSet
        Return CollectionHelpers.ToDataset(useCollection, datasetName, "table")
    End Function

    Public Shared Function ToDataset(ByVal useCollection As SD.LLBLGen.Pro.ORMSupportClasses.EntityCollectionBase2, ByVal datasetName As String, ByVal tableName As String) As System.Data.DataSet
        Dim fields As IEntityFields2 = useCollection.EntityFactoryToUse.CreateFields()
        If fields.Count = 0 Then
            Return Nothing
        End If

        ' Build the table schema from the entity's field definitions.
        Dim dt As New DataTable(tableName)
        For i As Integer = 0 To fields.Count - 1
            Dim colType As Type = fields(i).DataType
            ' A DataColumn can't be typed as Nullable(Of T); unwrap to the underlying type.
            If Nullable.GetUnderlyingType(colType) IsNot Nothing Then
                colType = Nullable.GetUnderlyingType(colType)
            End If
            dt.Columns.Add(fields(i).Name, colType)
        Next

        ' Copy each entity into a row. Nothing must be stored as DBNull.Value,
        ' otherwise the DataRow indexer throws.
        For Each entity As IEntity2 In useCollection
            Dim dr As DataRow = dt.NewRow()
            For i As Integer = 0 To fields.Count - 1
                Dim value As Object = entity.GetCurrentFieldValue(fields(i).FieldIndex)
                If value Is Nothing Then
                    dr(i) = DBNull.Value
                Else
                    dr(i) = value
                End If
            Next
            dt.Rows.Add(dr)
        Next

        ' Return a DataSet even when there are no rows, so callers always
        ' get the schema back instead of Nothing.
        Dim ds As New DataSet(datasetName)
        ds.Tables.Add(dt)
        ds.AcceptChanges()
        Return ds
    End Function

End Class
The facade code would then look something like this:
Public Function GetAccountNotes() As DataSet
    Dim col As New HelperClasses.EntityCollection(New FactoryClasses.ArAccountNotesEntityFactory)
    Dim db As New DataAccessAdapter(False)
    db.FetchEntityCollection(col, Nothing)
    Return HelperClasses.CollectionHelpers.ToDataset(col)
End Function
Ideally I would use the collection objects themselves in the UI, but as Otis has mentioned before, if it isn't broken, don't fix it. I could also use stored procedures, but that would require rerunning the code generator. The helper could be slow if the result set is very large, but I think that's acceptable.
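On the performance point, one possible tweak for large collections (this is a sketch against the same loop shape as my helper above, not tested code): wrap the row-copy loop in DataTable.BeginLoadData/EndLoadData, which suspends index maintenance, constraint checking, and change notifications during the bulk load, and use LoadDataRow with a value array instead of setting cells one at a time.

```vbnet
' Sketch of a faster row-copy loop for large collections.
' Assumes dt, fields, and useCollection are the same variables as in ToDataset.
dt.BeginLoadData()
For Each entity As IEntity2 In useCollection
    Dim values(fields.Count - 1) As Object
    For i As Integer = 0 To fields.Count - 1
        Dim v As Object = entity.GetCurrentFieldValue(fields(i).FieldIndex)
        If v Is Nothing Then
            values(i) = DBNull.Value
        Else
            values(i) = v
        End If
    Next
    ' True = the new row is added in the Unchanged state (AcceptChanges applied).
    dt.LoadDataRow(values, True)
Next
dt.EndLoadData()
```

Since LoadDataRow adds rows as Unchanged when the second argument is True, the final ds.AcceptChanges() call would become redundant for this table.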
Does anyone have any thoughts regarding this particular implementation?