This is unavoidable, sadly. We replace the database-related elements in the lambda with fetches from the datareader, leave the rest as-is, then compile the lambda and use the compiled delegate to create a new instance for each row from the datareader. So your lambda becomes:
(row, indexes) =>
    new UserForAttributeCalculation {
        DepartmentRoleId = row[indexes[0]],
        Login = row[indexes[1]],
        Tenants = new List<TenantEntity>(),
        EndDate = row[indexes[2]]
    }
This means the in-memory code inside the lambda isn't interpreted: we don't analyze what's there, because it can in theory be a large piece of in-memory code, and we would then have to analyze all of it, interpret what the analyzer finds, and act accordingly.
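To make the shape of this concrete, here is a minimal standalone sketch of the general technique (the names and types are illustrative, not the actual LLBLGen internals): a materializer delegate is compiled once per query and then invoked for every row, so the in-memory parts of the projection run as plain compiled code rather than being interpreted.

```csharp
using System;
using System.Collections.Generic;

// Illustrative stand-in for the projected type; not the real entity class.
class UserProjection
{
    public int DepartmentRoleId;
    public string Login;
    public List<string> Tenants;
}

class MaterializerSketch
{
    static void Main()
    {
        // The "compiled lambda": row values plus a column-index map go in,
        // a freshly materialized instance comes out. The new List<...>() is
        // the in-memory part that was left untouched by the rewrite.
        Func<object[], int[], UserProjection> materializer = (row, indexes) =>
            new UserProjection
            {
                DepartmentRoleId = (int)row[indexes[0]],
                Login = (string)row[indexes[1]],
                Tenants = new List<string>()
            };

        var indexes = new[] { 0, 1 };
        var rows = new[]
        {
            new object[] { 10, "alice" },
            new object[] { 20, "bob" }
        };

        foreach (var row in rows)
        {
            var u = materializer(row, indexes);
            Console.WriteLine($"{u.DepartmentRoleId} {u.Login} {u.Tenants.Count}");
        }
    }
}
```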
What I suspect is that the .NET CLR/JIT optimizes this per lambda instance: as we cache the compiled lambda, it's re-used for the same query and for each row. Frankly, I don't know why it doesn't work, as all we do is execute the compiled lambda, so it should create a new list every time.
A workaround might be:
[MethodImpl(MethodImplOptions.NoInlining)]
private List<T> CreateEmptyEntityList<T>() => new List<T>();
And then, instead of new List<TenantEntity>(), call this method:
var metadata = new LinqMetaData(adapter);
_usersHashedById = metadata.User.Select(u =>
    new UserForAttributeCalculation {
        DepartmentRoleId = u.DepartmentRoleId,
        Login = u.Login,
        Tenants = CreateEmptyEntityList<TenantEntity>(),
        EndDate = u.EndDate
    }).ToDictionary(u => u.UserId);
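To sanity-check the workaround in isolation, the sketch below (illustrative code, not part of the actual query) shows that the NoInlining factory hands out a distinct list per call, so no two materialized entities can end up sharing one:

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.CompilerServices;

class WorkaroundCheck
{
    // Opaque factory: NoInlining keeps the JIT from folding the allocation
    // into the caller, so each call observably allocates a new list.
    [MethodImpl(MethodImplOptions.NoInlining)]
    static List<T> CreateEmptyEntityList<T>() => new List<T>();

    static void Main()
    {
        var first = CreateEmptyEntityList<string>();
        var second = CreateEmptyEntityList<string>();
        first.Add("tenant-a"); // mutating one list must not affect the other
        Console.WriteLine(ReferenceEquals(first, second)); // False
        Console.WriteLine(second.Count);                   // 0
    }
}
```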