LLBLGen Pro documentation

Contents:

Welcome!
What's new in LLBLGen Pro v2.6
Migrating your code
Getting started
Compact Framework / Sql CE support
    Preface
    Requirements
    Supported functionality
    Compiling your code / using the code
    LLBLGen Pro functionality not available
Concepts
Entities, typed lists and typed views
    Preface
    Entities
    Entity Attributes
    Entity Relations
    Typed lists
    Typed views
    Stored procedure calls
O/R Mapping
    Preface
    Mapping with entities
    An entity's life-cycle
Entity inheritance and relational models
    Preface
    Supertypes / subtypes and NIAM terminology
    Hierarchy creation for proper entity relation modelling
    Hierarchy creation for proper entity field availability modelling
    Comparison of inheritance mapping with competing O/R mappers
    Abstract entities
    Limitations and pitfalls
Stateless persistence
    Preface
    Stateless development and O/R Mapping
    User state
Task based code generation
    Preface
    Tasks, task groups, .tasks and .preset files
Templates and Template Groups
    Preface
    Templates
    Included template files
    SelfServicing
    Adapter
    When to use which template group?
Database drivers
    Preface
    Database driver's tasks
    Supported features per database driver
Dynamic SQL
    Preface
    Dynamic SQL and Dynamic Query Engines
Type Converters
    Preface
    Using a type converter
    Type converter structure
Dependency Injection and Inversion of Control
    Preface
    Inversion of Control (IoC) by using Dependency Injection (DI)
    LLBLGen Pro's ways to inject dependent objects into entities
Using the designer
The GUI elements
Preferences and Project properties
    Preface
    User preferences
    Project properties
    Abbreviation conversions
Creating a project
    Preface
    Project information
    Database information
    SqlServer specific warning
    Assembly load resolving file
Adding and editing entities
    Preface
    Adding entities
    Editing an entity
    Inheritance info sub tab
    Fields mapped on database fields sub tab
    Fields mapped on relations sub tab
    Relations sub tab
    Fields mapped on related fields sub tab
    Code generation options sub tab
    Viewing entity information
Adding custom relations
    Preface
    Adding 1:1/1:n/m:1 relations
    Adding m:n relations
Adding and editing typed lists
    Preface
    Adding a Typed List
    Constructing a Typed List
    Entities selection sub tab
    Fields mapped on entity fields sub tab
    Custom properties sub tab
Adding and editing typed views
    Preface
    Adding a Typed View
    Editing a Typed View
    Fields mapped on view fields sub tab
    Custom properties sub tab
Adding and editing stored procedure calls
    Preface
    Adding a stored procedure call
    Editing a stored procedure call
    Parameters sub tab
    Custom properties sub tab
Defining and using type conversion definitions
    Preface
    Creating a new / editing an existing type conversion definition
    Using existing type conversion definition
    Setting type converters by using a plug-in
    Setting type converters automatically
Inheritance mapping
    Preface
    Creating hierarchies of type TargetPerEntity
    Creating hierarchies of type TargetPerEntityHierarchy
    Viewing hierarchies
    Destroying hierarchies
Working with plug-ins
    Preface
    Viewing installed plug-ins
    Running plug-ins on a single target
    Running plug-ins on multiple targets
    Designer Events and plug-ins
    Setting up pluralization and singularization of names
Refreshing the catalog schemas
    Preface
    Refreshing the catalog schemas
    Correcting mappings
Generating code
    Preface
    Configuring the generation process
    General settings tab
    Template bindings tab
    Task queue to execute tab
    Starting the generation process
Using the generated code
Compiling the code
    Preface
    Compiling
    Using the compiled assembly
Database specific features
    Preface
    SqlServer specific features
    NEWSEQUENTIALID() support
    Compatibility mode
    ArithAbort support
    User Defined Types support
    SqlServer CE Desktop support
    Oracle specific features
    Ansi joins
    Trigger-based sequence values
Adapter
DataAccessAdapter functionality
    Preface
    Functionality
    Persistence Info
    Connection strings
    Catalog specific persistence info
    Schema specific persistence info
    Command timeouts
    Recursive saves
    Fetching/deleting/saving entities/typed lists/typed views
    Calling stored procedures
    Transactions
    Intercepting activity calls
    ArithAbort flag (SqlServer only)
    DQE Compatibility mode (SqlServer only)
Using the entity classes
    Preface
    Instantiating an existing entity
    Using the primary key value
    Using a related entity
    Using a unique constraints value
    Using a prefetch path
    Using a collection class
    Using a Context object
    Polymorphic fetches
    Creating a new / modifying an existing entity
    Modifying an entity
    Setting the EntityState to Fetched automatically after a save
    FK-PK synchronization
    Deleting an entity
    Entity state in distributed systems
    Concurrency control
    Entities, NULL values and defaults
    Extending an entity by intercepting activity calls
    IDataErrorInfo implementation
Using the entity collection classes
    Preface
    Entity retrieval into an entity collection object
    Using a related entity
    Using a prefetch path
    Using a collection object
    Using m:n relations
    Entity data manipulation using collection classes
    Updating entities in a collection in memory
    Deleting one or more entities from the persistent storage
    Client-side sorting
    Finding entities inside a fetched EntityCollection
    Hierarchical projections of entity collections
    Example
    Tracking entity remove actions
Using entityviews with entity collections
    Preface
    Creating an EntityView2 instance
    Filtering and sorting an EntityView2
    .NET 2.0: Use a Predicate(Of T) or Lambda expression for a filter
    Multi-clause sorting
    Filtering using multiple predicates
    View behavior on collection changes
    Projecting data inside an EntityView2 on another data-structure
    Projection objects: EntityPropertyProjector
    Distinct projections
Using the context
    Preface
    The Context class
    Using the Context class
    Retrieving instances from a Context
    Single entity fetches
    Prefetch Path fetches
    Entity Save calls
    Multi-entity activity
    Remarks
Using TypedViews, TypedLists and Dynamic Lists
Using the typed view classes
    Preface
    Instantiating and using a Typed View
    Instantiating and filling a Typed View
    Reading a value from a filled Typed View
    Limiting and sorting a typed view
Using the typed list classes
    Preface
    Instantiating and using a Typed List
Using dynamic lists
    Preface
    Creating dynamic lists
Using GROUP BY and HAVING clauses
    Preface
    Using GroupByCollection and Having Clauses
Calling a stored procedure
    Preface
    Retrieval Stored Procedure Calls
    Action Stored Procedure Calls
    Wrap call in IRetrievalQuery object
Fetching DataReaders and projections
    Preface
    Fetching a resultset as an open IDataReader
    Fetching a Retrieval Stored Procedure as an IDataReader
    Fetching a Dynamic List as an IDataReader
    Resultset projections
    Projecting Stored Procedure resultset onto entity collection
    Projecting Dynamic List resultset onto custom classes
Filtering and Sorting
Getting started with filtering
    Preface
    Upgrading from v1.0.200x.y: no PredicateFactory
    Predicates and Predicate expressions
    Creating and working with field objects
    Setting aliases, expressions and aggregates on fields
    What to include in a filter
The predicate system
    Preface
    The predicate classes
    Native language filter construction
    FieldBetweenPredicate
    FieldCompareExpressionPredicate
    FieldCompareNullPredicate
    FieldCompareRangePredicate
    FieldCompareSetPredicate
    FieldCompareValuePredicate
    FieldFullTextSearchPredicate
    FieldLikePredicate
    AggregateSetPredicate
    DelegatePredicate
    MemberPredicate
Advanced filter usage
    Preface
    Negative predicates
    Filtering on entity type
    Multi-entity filters
    Custom filters for EntityRelations
    Weak relations
    Advanced filtering
Sorting
    Preface
    Upgrading from v1.0.200x.y: no SortClauseFactory
    Sorting
    Case-insensitive sorting
Prefetch paths
    Preface
    Using Prefetch Paths, the basics
    Optimizing Prefetch Paths
    Polymorphic Prefetch Paths
    Multi-branched Prefetch Paths
    Advanced Prefetch Paths
    Single entity fetches and Prefetch Paths
    Prefetch Paths and Paging
Excluding / Including fields for fetches
    Preface
    Fetching excluded fields in batches
    Entity fetch example
    Prefetch path example
Transactions
    Preface
    Normal native database transactions
    Transaction savepoints
    COM transactions
    .NET 2.0: System.Transactions support
Databinding at designtime and runtime
Databinding with Windows Forms / ASP.NET 1.x
    Preface
    Implemented functionality
    Design time support VS.NET 2002/2003
    Design time support VS.NET 2005
    Databinding and inheritance
Databinding with ASP.NET 2.0
    Preface
    Getting started with the LLBLGenProDataSource2 control
    Caching of data
    Two way databinding
    LivePersistence and events
    Using the LLBLGenProDataSource2 control
    Intercepting activity
    The PerformWork event in an AJAX environment
    Filtering on the fly
    Trapping invalid input values
    Usage examples
    Example using LivePersistence
    Example using Perform Event handlers
    Setting values for insert/update using bound parameters
    Converting empty string values to NULLs for inserts / updates
    The SortingMode property
Collection/typed list/typed view paging
    Preface
    Paging through an entity collection
    Get the total number of objects
    Paging through a TypedList or TypedView
    Get the total number of objects
UnitOfWork and field data versioning
    Preface
    Unit of work usage
    Single entities
    Entity collections
    Stored procedures
    DeleteEntitiesDirectly and UpdateEntitiesDirectly
    Monitoring
    Specifying the order in which the actions are executed
    Field data versioning
Distributed Systems
.NET remoting support
    Preface
    Enabling FastSerialization
    Serializing / Deserializing custom entity data
    Normal serialization / Deserialization
    FastSerialization Serialize / Deserialize
    RemovedEntitiesTracker with FastSerialization
XML Webservices / WCF support
    Preface
    Example usage
    Custom Member serialization / deserialization
    .NET 1.x specific: Caveats using wsdl.exe
    .NET 1.x/VS.NET 2002/3 and the entity classes
    .NET 2.0 specific: Schema importers
    .NET 3.0 specific: Windows Communication Foundation (WCF) support
SelfServicing
DbUtils functionality
    Preface
    Functionality
    Connection strings
    Command timeouts
    ArithAbort flag (SqlServer only)
    DQE Compatibility mode (SqlServer only)
Using the entity classes
    Preface
    Two classes
    Instantiating an existing entity
    Using the primary key value
    Using a related entity
    Lazy loading/load on demand
    Using a unique constraints value
    Using a prefetch path
    Using a collection class
    Using a Context object
    Creating a new / modifying an existing entity
    Modifying an entity
    Setting the EntityState to Fetched automatically after a save
    Saving entities recursively
    FK-PK synchronization
    Deleting an entity
    Polymorphic fetches
    Concurrency control
    Entities, NULL values and defaults
    Extending an entity by intercepting activity calls
    IDataErrorInfo implementation
Using the entity collection classes
    Preface
    Entity retrieval into an entity collection object
    Using a related entity
    Using a prefetch path
    Using a collection object
    Using m:1 relations
    Using m:n relations
    Total control: GetMulti()
    Entity data manipulation using collection classes
    Updating a set of entities in the persistent storage
    Updating entities in a collection in memory
    Deleting one or more entities from the persistent storage
    Client-side sorting
    Finding entities inside a fetched EntityCollection
    Hierarchical projections of entity collections
    Example
    Tracking entity remove actions
Using entityviews with entity collections
    Preface
    Creating an EntityView instance
    Filtering and sorting an EntityView
    .NET 2.0: Use a Predicate(Of T) or Lambda expression for a filter
    Multi-clause sorting
    Filtering using multiple predicates
    View behavior on collection changes
    Projecting data inside an EntityView on another data-structure
    Projection objects: EntityPropertyProjector
    Distinct projections
Using the context
    Preface
    The Context class
    Using the Context class
    Retrieving instances from a Context
    Single entity fetches
    Prefetch Path fetches
    Entity Save calls
    Multi-entity activity
    Remarks
Using TypedViews, TypedLists and Dynamic Lists
Using the typed view classes
    Preface
    Instantiating and using a Typed View
    Instantiating and filling a Typed View
    Reading a value from a filled Typed View
    Limiting and sorting a typed view
Using the typed list classes
    Preface
    Instantiating and using a Typed List
Using dynamic lists
    Preface
    Creating dynamic lists
Using GROUP BY and HAVING clauses
    Preface
    Using GroupByCollection and Having Clauses
Calling a stored procedure
    Preface
    Retrieval Stored Procedure Calls
    Action Stored Procedure Calls
    Transaction support
    Wrap call in IRetrievalQuery object
Fetching DataReaders and projections
    Preface
    Fetching a resultset as an open IDataReader
    Fetching a Retrieval Stored Procedure as an IDataReader
    Fetching a Dynamic List as an IDataReader
    Resultset projections
    Projecting Stored Procedure resultset onto entity collection
    Projecting Dynamic List resultset onto custom classes
Filtering and Sorting
Getting started with filtering
    Preface
    Upgrading from v1.0.200x.y: no PredicateFactory
    Predicates and Predicate expressions
    Creating and working with field objects
    Setting aliases, expressions and aggregates on fields
    What to include in a filter
The predicate system
    Preface
    The predicate classes
    Native language filter construction
    FieldBetweenPredicate
    FieldCompareExpressionPredicate
    FieldCompareNullPredicate
    FieldCompareRangePredicate
    FieldCompareSetPredicate
    FieldCompareValuePredicate
    FieldFullTextSearchPredicate
    FieldLikePredicate
    AggregateSetPredicate
    DelegatePredicate
    MemberPredicate
Advanced filter usage
    Preface
    Negative predicates
    Filtering on entity type
    Multi-entity filters
    Custom filters for EntityRelations
    Weak relations
    Advanced filtering
Sorting
    Preface
    Upgrading from v1.0.200x.y: no SortClauseFactory
    Sorting
    Case-insensitive sorting
Prefetch paths
    Preface
    Using Prefetch Paths, the basics
    Optimizing Prefetch Paths
    Polymorphic Prefetch Paths
    Multi-branched Prefetch Paths
    Advanced Prefetch Paths
    Single entity fetches and Prefetch Paths
    Prefetch Paths and Paging
Excluding / Including fields for fetches
    Preface
    Fetching excluded fields in batches
    Entity fetch example
    Prefetch path example
Transactions
    Preface
    Normal native database transactions
    Transaction savepoints
    COM transactions
    .NET 2.0: System.Transactions support
Databinding at designtime and runtime
Databinding with Windows Forms / ASP.NET 1.x
    Preface
    Implemented functionality
    Design time support VS.NET 2002/2003
    Design time support VS.NET 2005
    Databinding and inheritance
Databinding with ASP.NET 2.0
    Preface
    Getting started with the LLBLGenProDataSource2 control
    Caching of data
    Two way databinding
    LivePersistence and events
    Using the LLBLGenProDataSource2 control
    Intercepting activity
    The PerformWork event in an AJAX environment
    Filtering on the fly
    Trapping invalid input values
    Usage examples
    Example using LivePersistence
    Example using Perform Event handlers
    Setting values for insert/update using bound parameters
    Converting empty string values to NULLs for inserts / updates
    The SortingMode property
Collection/typed list/typed view paging
    Preface
    Paging through an entity collection
    Get the total number of objects
    Paging through a TypedList or TypedView
    Get the total number of objects
UnitOfWork and field data versioning
    Preface
    Unit of work usage
    Single entities
    Entity collections
    Stored procedures
    DeleteEntitiesDirectly and UpdateEntitiesDirectly
    Monitoring
    Specifying the order in which the actions are executed
    Field data versioning
Linq to LLBLGen Pro
Getting started
    Preface
    LinqMetaData
    SelfServicing: passing a Transaction instance
    Setting variables on the Linq provider
    ILLBLGenProQuery
General usage
    Preface
    Aggregates
    Group by
    Order by
    Type filtering and casting
    Usage of 'as' in filters and projections
    Queryable: Contains
    String: Contains, StartsWith and EndsWith
    Paging through resultsets
    ElementAt / ElementAtOrDefault
    Using a Context
    Excluding / Including fields
    Hierarchical Sets
    Calling an in-memory method in the projection
    In-memory lambda expressions in projections
Prefetch paths
    Preface
    Approach 1: WithPath and PathEdges
    Location of WithPath calls
    Specifying nodes
    Approach 2: WithPath and Lambda expressions
    Multiple nodes at the same level
    SubPaths
    Filtering, sorting, excluding/including fields, limiting
    Polymorphic prefetch paths
Function mappings
    Preface
    FunctionMapping and FunctionMappingStore
    Calling unmapped .NET methods in a query
    Passing a custom FunctionMappingStore to the query
    Example of custom FunctionMapping usage
    Full-text search
    Supported default method / property mappings to functions
    Array methods / properties defined by System.Array
    Boolean methods / properties defined by System.Boolean
    Char methods / properties defined by System.Char
    Convert methods / properties defined by System.Convert
    DateTime methods/properties, defined by System.DateTime
    Decimal methods/properties, defined by System.Decimal
    String methods/properties, defined by System.String
    Object methods/properties, defined by System.Object
Remarks and limitations
    Preface
    Queryable methods which are partly implemented
    Remarks on several extension methods and constructs
    Not supported Queryable extension methods / overloads
    Not supported constructs
Handling exceptions
    Preface
    Exception strategy
    Custom exceptions
Field expressions and aggregates
    Preface
    Aggregate functions
    Supported aggregate functions
    Aggregate functions in scalar queries
    Expressions
    Expressions in select lists
    Expressions in predicates
    Expressions in entity updates
    Scalar query expressions
Calling a database function
    Preface
    Definition and scope
    Specifying constants for function parameters
    CASE support
    Function calls in expressions
    Examples
Derived tables and dynamic relations
    Preface
    DerivedTableDefinition
    Targeting a DerivedTableDefinition for an entity fetch
    Using a DerivedTableDefinition
    DynamicRelation
Setting up and using Dependency Injection
    Preface
    Specifying Dependency Injection information on a class
    Enabling Dependency Injection Info discovery
    Auto-discovery of instance types
    Manual discovery through dependencyInjectionInformation sections in the .config file
    Instance type example
    Dependency Injection scopes
    Defining two Dependency Injection scopes example
    Using nested scopes example
Adding your own code to the generated classes
    Preface
    User code regions
    Defined user code regions
    .NET 2.0 and higher: partial classes
    Include templates
    Tutorial: adding code to an entity
    Using .lpt templates as include templates
Validation per field or per entity
    Preface
    Built-in field validation logic
    Bypassing built-in validation logic
    Defining the Scale overflow correction action to use
    Validation logic inside entity classes
    Field validation
    Entity validation
    Validation logic inside validator classes
    Setting an entity's Validator
    Field validation
    Entity validation
    IDataErrorInfo implementation
Setting up and using Authorization
    Preface
    Authorization basics
    Authorizable actions
    Location of your Authorization logic
    Authorization failures
    Authorizers
    Setting an Entity's Authorizer
    Authorizer examples
Setting up and using Auditing
    Preface
    Auditable actions
    Location of your Auditing logic
    Auditors
    Setting an entity's Auditor
    Adapter: Auditors in XML serialization scenarios
    Auto-persist recorded data
    Controlling transaction creation
    Auditor example
Tapping into actions on entities and collections
    Preface
    Events
    Overridable methods
XML support (serialization/deserialization)
    Preface
    Writing / reading object hierarchies to / from XML
    Verbose XML
    Compact / Compact25 XML
    Culture specific format specifications
    XML format descriptions
Custom properties
    Preface
    Entity/TypedList/TypedView custom properties
    Entity/TypedList/TypedView field custom properties
Application configuration through .config files
    Preface
    Dependency Injection settings
    Trace switch settings
    Culture specific format specifications
    Entity behavior settings
    Catalog name overwriting (SqlServer, Sybase ASE)
    Schema name overwriting (SqlServer, Oracle, DB2, PostgreSql, Sybase ASE, Sybase ASA)
    Trigger based sequence values (Oracle/Firebird)
    Ansi joins (Oracle only)
    DQE Compatibility mode (SqlServer only)
Troubleshooting and debugging
    Preface
    Conventions
    Dynamic Query Engine tracing
    Info level tracing
    Verbose level tracing
    ORM Support classes tracing
    Info level tracing
    Verbose level tracing
    Linq to LLBLGen Pro tracing
    .NET 2.0 / VS.NET 2005 specific: Debugger visualizers
Tutorials and examples
How do I ... ?
    Preface
    Examples in the documentation
    Using the designer
    Using the generated code, SelfServicing specific
    Using the generated code, Adapter specific
    Using the generated code, general
    Using the generated code, Linq to LLBLGen Pro
    Various How do I? examples
Tutorial - Create an LLBLGen Pro project
    Preface
    Steps to create a project
    Creating project elements
    Mapping entities
    Mapping a typed view
    Mapping a stored procedure call
Tutorial - Generating code
    Preface
    Generating sourcecode
Tutorial - Compiling the generated code and setting up the solution
    Preface
    Adapter: Setting up the Solution
    SelfServicing: Setting up the Solution
Tutorial - Adapter: Adding code to the console application
    Preface
    Using the generated code
    Setting up the using / Imports statements
    Using Entities
    Using Typed Views
    Using Retrieval Stored Procedure Calls
Tutorial - SelfServicing: Adding code to the console application
    Preface
    Using the generated code
    Setting up the using / Imports statements
    Using Entities
    Using Typed Views
    Using Retrieval Stored Procedure Calls
Best practices
    Database best practices
    Designer best practices
    Generated code best practices
Miscellaneous
    About LLBLGen Pro
    Online support site

The next-gen O/R mapper - code generator for .NET

Designer version: 2.6
Runtime libraries version: 2.6
©2002-2008 Solutions Design. All rights reserved. http://www.llblgen.com


Welcome to the LLBLGen Pro documentation!
Preface
To get the most out of this documentation, it helps to understand the conventions used in writing it: the order in which topics are laid out, how the various .NET versions are referred to, and so on.

Order of elements
The LLBLGen Pro documentation is ordered so that the things you need to know first are also described first. That's why the Concepts section is the first section you'll encounter: it describes the various aspects of the system, so later on you won't wonder what certain terms mean. After you've familiarized yourself with the topics described in Concepts, you can move on to the designer, where you learn how to create a project and generate code. Once you've successfully generated code, you can move to the sections which describe that generated code in detail and show how to use it to the full.

The various .NET versions and this documentation
At the time of publishing this documentation, the following .NET versions were known: 1.0, 1.1, 2.0, 3.0 and 3.5. .NET 3.0 and .NET 3.5 are effectively .NET 2.0 with additional assemblies: WPF, WCF and WF (added in .NET 3.0) and the Linq-oriented assemblies (added in .NET 3.5). So when .NET 2.0 is mentioned in the documentation, you can also read .NET 3.0 or .NET 3.5, in short: .NET 3.x. When .NET 1.x is mentioned, it means .NET 1.0 and .NET 1.1.

Some features are only for .NET 2.0 and should be ignored for .NET 3.x; this is the case with the SchemaImporter feature for webservice stubs, which you can ignore if you use WCF on .NET 3.x. In these cases it's explicitly stated that the feature is only for .NET 2.0 and not for higher versions. In all other cases, you can read .NET 3.x where .NET 2.0 is mentioned. Some features are .NET 3.5 specific, like the Linq related features; in those situations the explicit version '.NET 3.5' is mentioned. If a version is given as .NET x.y+, for example .NET 2.0+, all versions starting with that version are meant, so 2.0, 3.0, 3.5 etc.

Not all code is ported to all .NET versions. Sometimes you'll see code snippets which are .NET 2.0+ because they use generics, and sometimes you'll see code snippets which work on all .NET versions because they're formulated without generics, but which could be written differently in .NET 2.0+ with generics. The main example of this is the EntityCollection<T> class, an Adapter specific class which is a generic class in .NET 2.0+. So when you see a non-generic EntityCollection class in a code snippet, you can use the generic variant in .NET 2.0+, except of course when you're returning the entity collection from a web method, as webservices don't support generics.

Navigation
We've added links to sub-sections within a section to the Contents tree, which results in [+] icons in front of documents. When you expand such a node, you'll see sub-links to parts of that particular document, so you can quickly navigate the contents tree to where you want to go.

Code fragments in panels and tabs
The code snippets in this documentation are grouped in tabs: a snippet which is given in multiple languages has a tab per language, as shown in the following example:
C# VB.NET

// C#
public class DeepThought
{
    public int GetAnswer()
    {
        return 42;
    }
}

' VB.NET
Public Class DeepThought
    Public Function GetAnswer() As Integer
        Return 42
    End Function
End Class

Sometimes code is grouped in collapsible panels, to make the text less cumbersome to read if you're not interested in one or more of the examples given. Below is an example of such a panel. To expand it, simply click the header.

Click me to expand / collapse
// C#
public class Marvin
{
    public void BecomeHappy()
    {
        throw new NotImplementedException("Not that anyone cares what I say, but the Restaurant is on the other end of the universe");
    }
}

LLBLGen Pro Reference Manual
This manual doesn't contain the LLBLGen Pro Reference Manual, which ships as a separate .chm file (or as a set of files if you want to integrate the reference manual into VS.NET help). It's highly recommended to regularly consult the reference manual for method overloads, class properties and other specifics of the methods and features discussed here: not every detail is described in this manual, and the reference manual is invaluable in that area.

Enjoy your exploration of the LLBLGen Pro documentation!

The LLBLGen Pro Team.
LLBLGen Pro v2.6 documentation. ©2002-2008 Solutions Design


What's new/changed in LLBLGen Pro v2.6?
Below you'll find a list of changes, additions and important fixes in the v2.6 release. More issues have been fixed, but those were generally small or were superseded by new functionality. It's recommended that you also browse the Migrating your code section for breaking changes.

End of life for .NET 1.x
The .NET 1.x frameworks are at the end of their lifespan. v2.6 still supports .NET 1.0 and .NET 1.1, but the new features and changes made to the framework and documentation were largely focused on .NET 2.0+, and changes often weren't backported to .NET 1.x. We recommend that you migrate to .NET 2.0 or higher if possible.

New functionality / changes
General
- Full Linq support with our own Linq to LLBLGen Pro provider. This provider allows you to fully use Linq in C# or VB.NET on .NET 3.5 to query your database, using Adapter or SelfServicing. We've spent a lot of time implementing as much Linq functionality as possible and have added extra extension methods to use all the LLBLGen Pro features through Linq constructs, like prefetch paths and exclude/include fields. Read more...
- .NET 3.5 support, with code changes in the runtime so it works better with Linq to Objects, and with VS.NET 2008 project templates.
- SqlServer CE 3.5 support (compact framework and desktop).
- Derived table support. It's now possible to specify a complete query as a FROM clause or as a side of a relation. This feature requires support for derived tables in the database; the databases which don't support this are Firebird 1.x and SqlServer CE 3.1 or lower. Read more...
- Dynamic relation support. It's now possible to add a relation object in code between any two entities without a lot of effort. It's also possible to add a relation between a derived table and an entity, or between two derived tables. Read more...
- .NET 2.0+: Much lower memory consumption during transactions. When entities are saved, their state is preserved during the transaction they participate in, so if the transaction is rolled back, the entity state is rolled back to the state before it started participating in the transaction. This state preservation consumed a lot of memory in previous versions (60% of the memory taken by an entity was necessary for it); that overhead has been reduced by 90%. (Example: 500 test entities in memory occupy 1.6MB of memory. In v2.5, after a save they occupy 2.4MB of memory during the transaction; in v2.6, 1.7MB, just 100KB more.)
- .NET 2.0+: EntityField/EntityField2 consume up to 20% less memory. Combined with the transaction memory reduction, the memory consumption of a set of entities which participate in a transaction, right before commit of that transaction, is lower than the memory the same entities occupied in v2.5 without being in a transaction.
- .NET 2.0+: SqlServer CE Desktop is now easier to use. One can generate normal SqlServer code when SqlServer CE Desktop (CE 3.1 or higher) is used, as the DQE and templates work with the DbProviderFactories class and have two new compatibility modes: SqlServerCE3x and SqlServerCE3.5. When deploying a CE Desktop application, be sure to add the DbProviderFactories registration to your application's .config file; see CE Desktop's documentation for details. It's now possible to simply switch between CE Desktop and SqlServer server by changing the connection string and the compatibility mode setting (of course the schemas have to match). Read more...
- Support for CF.NET 3.5. LLBLGen Pro generated code is now supported on CF.NET 3.5 (SqlServer targeting projects only).
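As an illustration of the new Linq support, a minimal sketch (Adapter, .NET 3.5) could look like the following. This assumes generated code for a Customer/Order schema; LinqMetaData is the provider's entry point, and the WithPath/Prefetch extension methods shown are the prefetch path extensions referred to above — verify the exact signatures against the Linq documentation.

```csharp
// Sketch: querying with the Linq to LLBLGen Pro provider (Adapter, .NET 3.5).
// Names assume a generated Customer/Order schema.
using (DataAccessAdapter adapter = new DataAccessAdapter())
{
    LinqMetaData metaData = new LinqMetaData(adapter);

    // plain Linq query, translated to SQL by the provider
    var q = from c in metaData.Customer
            where c.Country == "Germany"
            select c;

    // extension method to fetch related entities via a prefetch path
    var qWithOrders = q.WithPath(p => p.Prefetch(c => c.Orders));

    foreach (CustomerEntity customer in qWithOrders)
    {
        // customer.Orders was fetched in the same fetch cycle
    }
}
```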

Runtime Libraries


Added
- DerivedTableDefinition class. A new class has been added, DerivedTableDefinition, which is used to specify a complete query as a FROM clause. It's the final piece of the puzzle to create any SQL query through LLBLGen Pro's query API. Read more...
- .NET 2.0+: EntityView/EntityView2 classes now have constructors which accept a Func<T, bool>. It's now possible to create an entity view from an entity collection by passing a func to its CTor instead of a predicate. (On .NET 2.0, you can use an instance of the .NET framework class Predicate<T>.) Example (.NET 3.5 code): EntityView2<CustomerEntity> customersFromGermany = new EntityView2<CustomerEntity>(customers, c=>c.Country=="Germany");
- .NET 2.0+: A generic DelegatePredicate<T> class. For in-memory filtering, a new generic variant of the DelegatePredicate class has been added. This generic variant accepts a Predicate<T>, which can be defined in .NET 3.5 with a Func<T, bool>. This way, you can define in-memory filtering using predicates where necessary using the DelegatePredicate<T> class.
- .NET 2.0+: String uniquing. Per DataAccessAdapter and DaoBase instance, a string cache is used to unique string value instances. When queries return multiple instances of the same string, each instance takes up memory, even though the value of the string is the same. The fetches of entities and projections now use that cache, so if you fetch a projection (or use a Linq query which returns an anonymous type) and a string value is returned multiple times in the query, the string is actually in memory just once, and all objects reference that same instance. This is OK, as strings are immutable anyway. Strings aren't 'interned' though. The cache lives as long as the DataAccessAdapter / Dao instance lives; this is done to prevent a big string cache building up over time. The string uniquing is done using UniqueList<T> and takes almost no performance overhead. In Adapter you can pass a string cache to the DataAccessAdapter by setting the protected property StringCacheForFetcher in a method you add to DataAccessAdapter.
- .NET 2.0+: Utility routines to create an empty dataset or datatable from a type or hierarchy for projections. A new utility class, GeneralUtils, has a couple of routines to produce an empty dataset from a prefetch path or an empty datatable from an EntityType value or factory. These routines can be used in combination with hierarchical projections of entity collections to datasets.
- .NET 2.0+: ExcludeFieldsList and IncludeFieldsList are now added as helper classes. The class ExcludeIncludeFieldsList is a bit confusing as what it contains depends on a flag. Two helper classes have been added which derive from ExcludeIncludeFieldsList (so they're usable whenever an ExcludeIncludeFieldsList is accepted) and which represent fields to exclude (ExcludeFieldsList) or to include (IncludeFieldsList). The classes are empty, except that they set the excludeContainedFields flag to true (ExcludeFieldsList) or false (IncludeFieldsList) automatically.
- .NET 2.0+: Setting to let the framework produce case-insensitive hashcodes for string values in entity fields. Through a new static property on the new class EntityFieldCore, EntityFieldCore.CaseSensitiveStringHashCodes, a developer can control whether the GetHashCode() method for a string typed entity field produces a hashcode based on the case-sensitive or the case-insensitive version of the string value. Default is true: case-sensitive hashcodes. Set this setting to false if you work with a database which has casing differences between FK and PK values. You can also set this setting in the config file by adding a new key/value pair to the appSettings element: caseSensitiveStringHashCodes, which has as value either 'true' or 'false'.
- New EntityField/EntityField2 constructors to target derived tables. It's now possible to target a derived table with a field, by using a new constructor on EntityField or EntityField2 which accepts two strings and a type.
- .NET 3.5: New tracer: LinqExpressionHandler. A new tracer has been added for the Linq provider. Set it to 3 to obtain the expression tree to evaluate in textual form (recommended for tracing Linq queries), or to 4 to get a very verbose list of all expression handler methods visited. Only set this tracer to level 4 if you need to, as the output is very verbose.
- .NET 2.0+: QueryApiObjectTraverser class. This class is the base class for various visitors which can be used to traverse a graph of elements, like predicate expressions, relations in a relation collection etc., and to obtain information from the graph while traversing.
- .NET 2.0+: DataSourceCacheLocation.None. The datasource controls now have an option to disable caching of the fetched data. This can be useful if the data is used in a readonly fashion. State of the control (except the data) is cached in the viewstate.
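To make the new ExcludeFieldsList helper concrete, a fetch which skips two large fields could be sketched as follows. This is illustrative only (Adapter): the CustomerFields names are hypothetical for a Customer entity with large Notes/Photo columns, and the FetchEntityCollection overload shape should be checked against the reference manual.

```csharp
// Sketch: excluding fields from a fetch with the new ExcludeFieldsList helper.
// Excluded fields keep their default values and can be fetched later if needed.
ExcludeIncludeFieldsList toSkip = new ExcludeFieldsList();
toSkip.Add(CustomerFields.Notes);   // hypothetical large field
toSkip.Add(CustomerFields.Photo);   // hypothetical large field

using (DataAccessAdapter adapter = new DataAccessAdapter())
{
    EntityCollection<CustomerEntity> customers =
        new EntityCollection<CustomerEntity>(new CustomerEntityFactory());
    // overload which accepts an ExcludeIncludeFieldsList
    adapter.FetchEntityCollection(customers, null, 0, null, null, toSkip);
}
```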

Changed
- .NET 2.0+: IProjector now has a new flag, SetUsingCTorHint, which hints the projector engine how to set the destination element: via a CTor parameter (true) or otherwise, e.g. via a property setter action. This is a hint flag which is ignored by some projector engines, like DataProjectorToDataTable and DataProjectorToEntityCollection, which always fill the destination elements via other ways than ctors. DataProjectorToValueList also ignores this flag as it's not applicable: the destination is a single value, so there's no ctor to call nor property to set. Important: projectors for CTors have to appear in the same order as their destination parameters appear in the CTor to use.
- .NET 2.0+: IEnumerable and IEnumerable<T> implemented on various classes and in methods. To support Linq to Objects and to make traversing some collection based classes a bit easier, IEnumerable<T> and IEnumerable have been implemented on more classes, to make them easier to use on .NET 3.5.
- .NET 2.0+: 'HasErrors' and 'RowError' columns are now filtered out in design mode of the datasource controls. TypedLists/views always had 2 extra columns in design mode of the LLBLGenProDataSource controls: "RowError" and "HasErrors". These two columns are now filtered out.
- .NET 2.0+: Adapter: RemovedEntitiesTracker is now serialized into XML / deserialized from XML. When a collection had a RemovedEntitiesTracker set and it was serialized over XML, the RemovedEntitiesTracker wasn't serialized into the XML. This has been changed: the RemovedEntitiesTracker is now serialized with contents and deserialized back at the other end. The deserialized collection is of type EntityCollectionNonGeneric; use the IEntityCollection2 interfaces to use the deserialized collection in your code. The collection isn't serialized nor deserialized in SelfServicing, as SelfServicing isn't recommended in distributed scenarios.
- .NET 2.0+: Adapter: FastSerializer/Deserializer now have a CTor which accepts an open reader/writer. FastSerializer/Deserializer can now be instantiated with an open reader/writer to serialize your own data more easily.
- .NET 2.0+: ObjectGraphUtils.ProduceCollectionsPerTypeFromGraph now has an overload which accepts an array of collections. This overload can be used to merge graphs defined by different entity collection instances into one set of collections with entities of the same type stored per collection, effectively merging all collections together.
- .NET 2.0+: SqlServer now uses a parameterized TOP clause if in SqlServer 2005 mode. When the DQE is set to SqlServer2005 mode (the mode for SqlServer 2005 and up), it will emit TOP(@param) instead of TOP(number), which can lead to more re-use of execution plans and thus faster queries.
- .NET 2.0+: IEntityFields/IEntityFields2 now implement IList. The IEntityFields/IEntityFields2 interfaces (and the EntityFields/EntityFields2 classes) are now usable in IList scenarios without casting. Not all methods are implemented (see the reference manual for details).
- .NET 2.0+: CreateHierarchicalProjection now re-uses DataTable instances. If a DataSet passed in to the projection-to-DataSet routine contains a DataTable with the name of the entity, it will be used instead of a new DataTable (though it will be cleared of data). This way it is possible to pass a pre-fab DataSet created with the utility routines for dataset creation, which is then filled with projected data.
- .NET 2.0+: Typed lists and typed views now implement IEnumerable<T>. TypedList/view classes now implement IEnumerable<T>, where T is the row type of the typedlist/view. This allows developers to use Linq over a typedlist/view in memory.
- .NET 2.0+: m:1/1:1 properties now raise the PropertyChanged event when changed. When you for example set the property OrderEntity.Customer to a new CustomerEntity instance, LLBLGen Pro will now raise the PropertyChanged event for 'Customer'.
- .NET 2.0+: When PK fields are synced with FK fields, all FK fields now trigger a PropertyChanged event. In previous versions, the syncing of a PK with an FK resulted in 1 PropertyChanged event on the FK side, even if the PK and FK contained multiple fields. In v2.6, a PropertyChanged event is raised for every field in the FK.
- .NET 2.0+: Unit of work classes now unwrap TargetInvocationExceptions. The calls to directly delete/update entities, or delegated calls to procs by the unit of work classes, can involve Invoke calls. If such an operation results in an exception, this exception is wrapped inside a TargetInvocationException. The UnitOfWork(2) classes now unwrap the inner exception and rethrow that exception instead of the TargetInvocationException.
- .NET 2.0+: DQEs are more clever when determining if DISTINCT is required. To efficiently and correctly fetch data, it sometimes is necessary to assure unique rows. The typical way to do so is adding 'DISTINCT' to the result query. The downside is that 'DISTINCT' adds a set of restrictions to the query, which might cause a different, less efficient overall data fetch experience. The DQEs are now better capable of determining whether 'DISTINCT' is required: if no duplicate rows are expected, DISTINCT can be omitted, and therefore the restrictions of DISTINCT can be avoided.
- .NET 2.0+: Entity collections now override ToString(). An entity collection now overrides ToString() and returns the entity type it is set to and the number of elements in the collection. This is handy when debugging code.
- .NET 2.0+: SqlServer CE 3.x targeting DQEs now emit named parameters. SqlServer CE 3.x supports named parameters instead of anonymous parameters with '?'. The DQEs targeting SqlServer CE 3.x databases have been enhanced to emit named parameters, so the issues with anonymous parameters and SqlServer CE 3.5 are avoided.
- DbFunctionCall now no longer ignores schema or catalog names specified when the function name is a pre-formatted string. This doesn't normally lead to breaking code, as the names were ignored previously, but in case you passed the names, they now have effect.
- .NET 2.0+: CancelEdit() now raises PropertyChanged events. When CancelEdit() is called on an entity (either by manual code or by a bound control), PropertyChanged is now raised for every field which was changed during the edit cycle. This makes bound controls reflect the reset values for the changed fields, as the fields were reset to their values prior to the start of the edit cycle.
- .NET 2.0+: TypedListBase, TypedListBase2 and TypedViewBase are now generic classes.
- .NET 2.0+: The default culture used for XML serialization is now the invariant culture. This culture is controlled by the XmlHelper.CultureNameForXmlValueConversion property or the .config file setting cultureNameForXmlValueConversion. Previously it was set to the current culture of the executing thread. If you want to use a different culture, you have to set the property or the config setting.
- SelfServicing: Before CheckIfCurrentFieldValueIsNull does its checking, the entity now first refetches itself if required (e.g. when it's out of sync after a save).
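The XML serialization culture mentioned above can also be pinned via the .config setting. A sketch of the appSettings entry (the key name is per this section; 'nl-NL' is just an example value):

```xml
<configuration>
  <appSettings>
    <!-- overrides the new invariant-culture default for XML value conversion -->
    <add key="cultureNameForXmlValueConversion" value="nl-NL"/>
  </appSettings>
</configuration>
```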

Drivers
Added
- SqlServer 2008 support. The 4 new types in SqlServer 2008 are now supported in the driver and runtime lib/templates.
- The Sybase ASE driver now allows setting the number of return parameters for stored procedures through the designer UI, similar to the SqlServer driver. Enabled when the selection of which stored procedures to retrieve is set to manual.

Changed
Oracle drivers using ODP.NET now throw out stored procedures which have at least 1 parameter and/or return value of the type BOOLEAN. This is because ODP.NET can't deal with BOOLEAN typed parameters in stored procedures.

Designer
Added
- Extra sync option: SyncRenamedFieldElementsAfterRefresh. This option controls whether entity fields and fields mapped onto relations are synced with the database parts if SyncMappedElementNamesAfterRefresh is set to true: if the fields are manually renamed, and SyncRenamedFieldElementsAfterRefresh is set to false, the names of these fields aren't synced. As manual renaming is tracked starting in v2.6 for entity fields and fields mapped onto relations, this feature only has effect when a field is manually renamed in v2.6.
- Option to disable hiding via Browsable(false) attributes. The preference setting (and project property) HideManyOneToOneRelatedEntityPropertiesFromDataBinding, default true, controls whether Browsable(false) attributes are generated on properties which represent fields mapped onto 1:1 or m:1 relations. If this option is set to false, these properties will show up in databinding scenarios, which could cause problems; hence the default of true.
- The designer now checks whether the file to save the project to has been changed after it was loaded. When you load a project file F into the designer, someone else changes that file afterwards, and you then try to save F, you'll now get a popup warning that you'll overwrite someone else's changes, as the file you've been working on has been altered after you loaded it into the designer.

Changed
- SqlServerAutoDetermineSProcType is now called 'AutoDetermineSProcType'. The setting is renamed because it's now also used by Sybase ASE. Preference/project settings don't break; they're migrated to the new setting.
- When a type converter class couldn't be loaded, the rest of the type converters in the assembly are still examined. In previous versions, when a TypeConverter derived class was found in an assembly and it couldn't be loaded, the complete assembly was ignored. In v2.6, the rest of the type converters are still examined; only the TypeConverter which couldn't be loaded is ignored.

Task performers
Changed
- The ProjectFileCreator task performer now works with <[RuntimeLibraryHintPathxy]>, where xy is the .NET version to target, e.g. 20.
- When a project is re-generated into an existing directory in which the code of a previous generation cycle is located, the cs/vbproj file(s) won't get their rootnamespace set again. Initially when a cs/vbproj file is generated, the rootnamespace is set, and in previous versions of LLBLGen Pro the rootnamespace name was also overwritten if the cs/vbproj file was altered in subsequent code generation cycles. This overwriting has been removed: if you alter the namespace after the first code generation cycle, it won't be reset. Normally you won't notice this unless you add classes to the generated code yourself.

Templates
Added
- .NET 2.0+, SelfServicing: Entities now have AlreadyFetched<PropertyName> properties. These properties return the state of the already-fetched flag which is used to determine if selfservicing should take place. If the property is set to false while the property has already been fetched, the related data is cleared (e.g. customer.AlreadyFetchedOrders = false; means customer.Orders.Clear()).
- DynamicRelation class. A new class has been added, DynamicRelation, which makes it easy to create relations in code between non-related entities, between entities and derived tables, or between derived tables. A DynamicRelation is also required to add a derived table to a query: set its left side to a derived table and then add the DynamicRelation instance to a RelationCollection.
- .NET 2.0+: IElementCreator/IElementCreator2 implementing classes. These classes are located in the FactoryClasses namespace and can be used to create various template group specific elements, like factories and dynamic relations. The implementation is mainly used by the Linq provider but can be handy in other situations as well, hence it's added to the .NET 2.0+ batch of code (which is used by the .NET 3.5 targeting code).
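A minimal sketch of the DynamicRelation class in use, joining two entities which have no modeled relation. The constructor overload shown here (operands, join hint, aliases, ON-clause predicate) is illustrative, as are the Customer/Employee field names — see the DynamicRelation documentation for the exact overloads.

```csharp
// Sketch: ad-hoc join between Customer and Employee on a shared City field,
// without a relation modeled between them in the project.
RelationCollection relations = new RelationCollection();
relations.Add(new DynamicRelation(
    EntityType.CustomerEntity,              // left operand
    JoinHint.Inner,                         // join type
    EntityType.EmployeeEntity,              // right operand
    string.Empty, string.Empty,             // aliases (none)
    (CustomerFields.City == EmployeeFields.City)));  // ON clause predicate
```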

LLBLGen Pro v2.6 documentation. ©2002-2008 Solutions Design


Migrating your code
Preface
With every release of the runtime libraries, designer and accompanying template sets, new features are added and changes are made to fix bugs or design flaws. While we do everything we can to avoid breaking your code, some changes are unavoidable. This section illustrates some important points to consider when you move your own code using the LLBLGen Pro generated code to the v2.6 templates and updated runtime libraries (version number: 2.6.0.0). Before you proceed, please read the following important notice:

Important: Please make backup copies of your work before migrating to v2.6. That is: make backups of your own code, the generated code and the .lgp file before using LLBLGen Pro v2.6. This way you can roll back to your original work and continue working with v2.5 or earlier versions if you decide that migration to v2.6 requires more work than you anticipated and you have to complete it at a later date.

Important: If you're generating new v2.6 code on top of a project generated with v2.5 or v2.0, it can be that when you load the project(s) into VS.NET, the references to the runtime libraries still point to the v2.5 or v2.0 runtime libraries. Please make sure you manually update these references to the v2.6 runtime libraries and, if applicable, add the reference to the LinqSupportClasses. See the section Compiling your code for more details about referencing the runtime libraries.

Furthermore, it's key to regenerate the code and also to check that you're indeed referencing the proper runtime libraries. Below is the list of breaking changes in every 2.x version, starting with the latest version, v2.6. If you're migrating from 1.0.200x.y to v2.6, be sure you read all the breaking changes on this page. If you're migrating from v2.0 to v2.6, you need to check the breaking changes in both the v2.5 and v2.6 sections. If you're migrating from v2.5, you only have to check the v2.6 section. If you run into a breaking change which isn't listed here, please let us know so we can add it.

Migrating generated code from version v2.5 to v2.6 runtime libraries
Please consult the What's new? page for breaking changes. We've also listed them below for convenience. In general they're minor, but you should nevertheless read the list and check whether they affect your code. A lot of the breaking changes were made to the .NET 2.0+ code only. This is due to the fact that the .NET 1.x codebase has reached its end of life (as has the CF.NET 1.x code). LLBLGen Pro v2.6 does ship with .NET 1.x code support, but many of the changes in v2.6 weren't made to that codebase. You're encouraged to upgrade to at least .NET 2.0.

Breaking changes v2.6


Runtime libraries
- Due to heavy refactoring, some of the protected methods of DaoBase and DataAccessAdapter which were used internally for, for example, prefetch path execution, have been moved to another class, PersistenceCore, and made internal static. If your code relies on these methods, please rewrite it so it doesn't rely on them anymore. This and other refactorings were performed to remove clones inside the codebase and to make the code more maintainable.
- RelationCollection and IRelationCollection's indexer now returns an IRelation instead of an IEntityRelation. This indexer (Item in VB.NET) returns either an IEntityRelation or an IDynamicRelation object, both of which derive from IRelation.
- SelfServicing: in prefetch paths, the intermediate entity in m:n relations is now aliased with the same pattern as the GetMultiManyToManyField method uses: Entityname_. Example: "Order_". Previously this was an empty string. Adapter already uses this pattern.
- .NET 2.0+: IEntityFactory(2).CreateHierarchyRelations() isn't virtual anymore. Instead, override the new IEntityFactory(2).CreateHierarchyRelations(string) method.
- .NET 2.0+: IEntityFactory2.GetEntityTypeFilter(bool) isn't virtual anymore. Instead, override the new IEntityFactory2.GetEntityTypeFilter(bool, string).
- DbFunctionCall now no longer ignores schema or catalog names specified when the function name is a pre-formatted string. This doesn't normally lead to breaking code, as the names were ignored previously, but in case you passed the names, they now have effect.
- IProjection has new members, which you have to implement in your code if you implemented IProjection previously.
- IPredicate now has 2 new interface members: GetFrameworkElementsInPredicate() (method) and ObjectAlias (property). Custom predicate implementations have to make sure these members return the correct values with respect to the custom predicate. ObjectAlias is now implemented on Predicate, the abstract base class of all predicates. If your custom predicate inherits from this class, you can remove the ObjectAlias property from your predicate. GetFrameworkElementsInPredicate returns a list of objects which are all LLBLGen Pro framework elements (entity fields, expression objects and the like) contained inside the predicate.
- .NET 2.0+: IEntityField.ToXml has been removed.
- .NET 2.0+: EntityField and EntityField2 have been refactored to use a shared base class, EntityFieldCore. This could lead to deserialization problems if you deserialize binary serialized predicates which were serialized using older runtime versions.
- .NET 2.0+: EntityField/EntityField2 now consume less memory (-20%), as elements which aren't used with entities are factored into separate classes/objects. This brings down memory consumption in fields, though it could lead to errors during deserialization if you deserialize binary serialized predicates which were serialized using older runtime versions.
- .NET 2.0+, SqlServer CE: On .NET 2.0+, SqlServer CE 3.0 or higher is supported, as the DQE uses named parameters in the queries. If you're using an older version of SqlServer CE, you have to upgrade to a newer version; recommended is v3.5 or higher.
- .NET 2.0+, SqlServer CE Desktop: LLBLGen Pro now uses the normal SqlServer DQE for generating queries for SqlServer CE Desktop, so migrating an existing CE Desktop targeting VS.NET project to v2.6 requires that you change the references to the SqlServer DQE. You also have to set the SqlServerCompatibilityLevel in your application's .config file: 3 for SqlServer CE 3.x and 4 for SqlServer CE 3.5.
- If you're using your own VS.NET templates, the <[RuntimeLibraryHintPath]> token is no longer used. You have to append the .NET version number, e.g. <[RuntimeLibraryHintPath20]> for the .NET 2.0 runtime libraries.
- If you add the same entity object twice to a collection contained inside an entity, e.g. you add myOrder twice to myCustomer.Orders, it is now added twice, not added once as before. If you set the property DoNotPerformAddIfPresent to true on the collection, the entity is added once. DoNotPerformAddIfPresent is false by default. This change is caused by the fact that if you set a property mapped onto a relation to the same object (e.g. myOrder.Customer = myCustomer; where myOrder.Customer was already set to myCustomer), it's no longer desynced and re-synced; the set is a simple no-op, as nothing has to be done. This means that if you add myOrder twice to myCustomer.Orders, it's no longer first desynced from myCustomer, and therefore it's not removed from the collection before it's added. Normally this isn't a problem, only when you tend to add entities multiple times to a collection, have DoNotPerformAddIfPresent set to false, and rely on the number of entities in the collection. To work around this: either set the flag DoNotPerformAddIfPresent to true, or prevent adding the same entity twice, if you really have to rely on the number of unique entities in a collection.
- .NET 2.0+: When CancelEdit() is called on an entity (either by manual code or by a bound control), PropertyChanged is now raised for every field which was changed during the edit cycle. This makes bound controls reflect the reset values for the changed fields, as the fields were reset to their values prior to the start of the edit cycle.
- IViewProjectionData doesn't have a CreateDataRelations method anymore. This method has been moved to the new GeneralUtils class as a static method.
- Hierarchical projections from entity collections to DataSet don't clear the DataSet's Tables collection anymore, so if you pass a DataSet with tables, they won't be removed.
- .NET 2.0+: TypedListBase, TypedListBase2 and TypedViewBase are now generic classes.
- When a project is re-generated into an existing directory in which the code of a previous generation cycle is located, the cs/vbproj file(s) won't get their rootnamespace set again. Initially when a cs/vbproj file is generated, the rootnamespace is set, and in previous versions of LLBLGen Pro the rootnamespace name was also overwritten if the cs/vbproj file was altered in subsequent code generation cycles. This overwriting has been removed: if you alter the namespace after the first code generation cycle, it won't be reset. Normally you won't notice this unless you add classes to the generated code yourself.
- The default culture used for XML serialization is now the invariant culture. This culture is controlled by the XmlHelper.CultureNameForXmlValueConversion property or the .config file setting cultureNameForXmlValueConversion. Previously it was set to the current culture of the executing thread. If you want to use a different culture, you have to set the property or the config setting.
- The TDL statements <[IsForeignKey]> and <[If IsForeignKey]> now result in true if the relation is still visible but the field mapped onto the relation is hidden. This was previously not the case: if the field mapped onto the relation was hidden but the relation was still there, these statements would result in false.
- Oracle drivers targeting ODP.NET now throw out stored procedures with parameters or a return value of type BOOLEAN. This is because ODP.NET can't deal with BOOLEAN typed parameters on procs, so allowing them would result in runtime exceptions.
- .NET 2.0+: EntityBase/EntityBase2 no longer use the method called FlagAllFieldsAsChanged. This method was used by RollbackFields to simply raise changed events for all fields in the entity. However, this is inefficient if only a few fields (or none) were actually changed. This has been fixed internally, so relying on the call to FlagAllFieldsAsChanged from RollbackFields is no longer going to work: the method is no longer called. The method is still there if you want to use it to flag all fields as changed.
- .NET 2.0+: EntityBase/EntityBase2 no longer raise a changed event for all fields in an entity if RollbackFields is called: they now only raise a changed event for fields which were changed due to the rollback.
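The duplicate-add change above can be illustrated with a small sketch (assumes generated CustomerEntity/OrderEntity classes, with myCustomer and myOrder already instantiated):

```csharp
// v2.5: the second Add was effectively ignored (desync/resync removed the
// entity first); v2.6: myOrder ends up in the collection twice.
myCustomer.Orders.Add(myOrder);
myCustomer.Orders.Add(myOrder);   // a real second add in v2.6

// opt back into add-once semantics, as described above:
myCustomer.Orders.DoNotPerformAddIfPresent = true;
```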

Templates, generated code.
.NET 2.0+, SqlServer: The SqlServer templates and DQE now use the DbProviderFactories class to produce connections, commands and the like, based on the compatibility setting inside the DQE. This has the advantage that you can switch between SqlServer server and SqlServer CE Desktop without regenerating/recompiling any code: just change a setting in the .config file of your application and change the connection string. It is however a breaking change, because CE Desktop-using developers have to use different templates: the SqlServer CE templates aren't available anymore on the .NET 2.0/3.x+ platforms. Furthermore, referencing the SqlServerCe dll isn't required anymore, as long as the provider is registered in the .config file of the application or in the machine.config file of the .NET 2.0 framework. See the SqlServer CE documentation and the MSDN documentation for details about DbProviderFactories. NOTE: if you use views, stored procedures and/or types in your SqlServer database which aren't supported in the SqlServer CE Desktop version you're using, you'll get errors at runtime. Please check this up front.
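As a sketch, a CE Desktop 3.5 application's .config file could combine the compatibility setting and the provider registration like this. The SqlServerCompatibilityLevel key and its values are as described in this section; the provider invariant name, factory type and public key token shown are the standard Microsoft values for SqlServer CE 3.5 — verify them against your installed version.

```xml
<configuration>
  <appSettings>
    <!-- 3 = SqlServer CE 3.x, 4 = SqlServer CE 3.5 -->
    <add key="SqlServerCompatibilityLevel" value="4"/>
  </appSettings>
  <system.data>
    <DbProviderFactories>
      <!-- registration needed if it's not already in machine.config -->
      <add name="Microsoft SQL Server Compact Data Provider"
           invariant="System.Data.SqlServerCe.3.5"
           description=".NET Framework Data Provider for Microsoft SQL Server Compact"
           type="System.Data.SqlServerCe.SqlCeProviderFactory, System.Data.SqlServerCe, Version=3.5.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91"/>
    </DbProviderFactories>
  </system.data>
</configuration>
```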

Breaking changes v2.5
Runtime libraries
Built-in precision/scale checks for numeric values. These checks are performed automatically (unless you turn off automatic checking) to prevent overflows in the database. For example, if you define a field of type decimal in your database with precision 10 and scale 2, and you try to store the value 1234567.123 into this field, you'll get an exception that the value will cause an overflow, as the precision/scale of the value exceeds the precision and scale of the field. Due to these automatic checks, developers don't have to write validators for every numeric field to see if there is a possible overflow. This feature can break existing applications at runtime, because suddenly values like 10.500 are rejected while they were acceptable in previous versions. To overcome this, a global static (shared) property on the EntityBase(2) classes, called ScaleOverflowCorrectionActionToUse, has been added which accepts one of 3 values: None (throw an exception on scale overflow), Truncate (the default, which truncates the fraction to fit the scale) or Round (which rounds the fraction using Math.Round). This property is also settable via a config file setting, by adding a line to appSettings in the config file of your application with key scaleOverflowCorrectionActionToUse and value 0, 1 or 2, which represent None, Truncate or Round.


- .NET 2.0/3.0 specific: every entity class (except subtypes of course, which derive from their supertype's class) now derives from the new CommonEntityBase class, which is a generated partial class placed between the generated entity class and EntityBase(2). If you're using your own preset, you have to add a new task to the preset to make your code work: the CommonEntityBase class has to be generated. All standard presets shipped with LLBLGen Pro already have the CommonEntityBase task added to them; this requirement is solely for custom presets, which also means modified standard presets which have been saved under a different name. To generate the CommonEntityBase class automatically using your custom preset, you have to add a new task to that preset: load a project and press F7 to open the generator configuration dialog. Go to tab 3 and select the preset to alter. In the run queue, select the task which generates the entities (the EntityClassesGenerator task) and click Add Tasks. For Adapter, check the checkbox in front of the task SD.Tasks.Adapter.CommonEntityBaseGenerator and click OK. For SelfServicing, check the checkbox in front of the task SD.Tasks.SelfServicing.CommonEntityBaseGenerator and click OK. The CommonEntityBase task is now added to your preset; save the preset by clicking 'Save' at the top of the dialog.
- ScalarQueryExpression no longer forces a TOP 1 (or equivalent on databases without TOP, like Oracle) clause in the query. TOP 1 severely slowed down execution time, and it also gave errors on Oracle if the ScalarQueryExpression was correlated. The TOP 1 clause is still enforceable: a new CTor overload has been added which accepts a boolean forceRowLimit (default: false). If you set this parameter to true, you'll get a TOP 1 (or equivalent) in the produced scalar query. You can also use the ForceRowLimit property to set this flag.
- Every entity now has a method called CreateConcurrencyPredicateFactory. This could make your code uncompilable if you have added this method yourself; if this is the case, simply make your method override the base class' method. This method is called at entity construction time to produce a valid ConcurrencyPredicateFactory object, which is then set as the entity's ConcurrencyPredicateFactoryToUse.
- The entity method SetNewFieldValue(4) (adapter: SetNewFieldValue(3)), the protected overload, is now obsolete; it's been replaced by the SetValue(3) method. Normally you won't run into this, but if you use the protected overload, please update your code to use SetValue instead. The virtual overload of the method SetNewFieldValue is no longer virtual. If you did override this method, please override the OnSetValue method or the OnSetValueComplete method instead. The normal public SetNewFieldValue(2) method is still available and not marked as obsolete.
- The UnitOfWork2 object now auto-commits the transaction it starts itself if the passed-in adapter doesn't have a running transaction active. If you relied on the fact that the transaction started by the UnitOfWork2 object wasn't auto-committed for you, you should change your calls to UnitOfWork2.Commit(adapter, autoCommit) and pass 'false' for autoCommit.
- AcceptChanges/RejectChanges methods in EntityBase(2)/EntityFields(2)/EntityField(2) objects are now either internal or removed. If you were using these methods in your code, please change your code to use EntityBase(2).SaveFields/RollbackFields.
- UnitOfWork(2).Commit now returns an integer which reflects the number of entities affected by the total set of actions performed. This could break overrides of Commit(2), as that method previously had a void return type.
- Better reporting of the number of entities saved when saving collections. This could affect unit tests, which might now fail because the number returned isn't the queue length anymore but the actual number of entities saved.
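The scaleOverflowCorrectionActionToUse setting mentioned in this section can also be configured without code. A sketch of the .config entry (key name and values per this section):

```xml
<configuration>
  <appSettings>
    <!-- 0 = None (throw on scale overflow), 1 = Truncate (default), 2 = Round -->
    <add key="scaleOverflowCorrectionActionToUse" value="2"/>
  </appSettings>
</configuration>
```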
- Every entity now has a method called CreateConcurrencyPredicateFactory. This could make your code uncompilable if you added such a method yourself; if so, simply make your method override the base class' method. This method is called at entity construction time to produce a valid ConcurrencyPredicateFactory object, which is then set as the entity's ConcurrencyPredicateFactoryToUse.
- The entity method SetNewFieldValue(4) (adapter: SetNewFieldValue(3), the protected overload) is now obsolete; it's been replaced by the SetValue(3) method. Normally you won't run into this, but if you use the protected overload, please update your code to use SetValue instead.
- The virtual overload of the method SetNewFieldValue is no longer virtual. If you overrode this method, please override the OnSetValue method or the OnSetValueComplete method instead. The normal public SetNewFieldValue(2) method is still available and not marked as obsolete.
- The UnitOfWork2 object now auto-commits the transaction it starts itself if the passed-in adapter doesn't have a running transaction active. If you relied on the transaction started by the UnitOfWork2 object not being auto-committed for you, change your calls to UnitOfWork2.Commit(adapter, autoCommit) and pass 'false' for autoCommit.
- The AcceptChanges/RejectChanges methods in the EntityBase(2)/EntityFields(2)/EntityField(2) objects are now either internal or removed. If you were using these methods in your code, please change your code to use EntityBase(2).SaveFields/RollbackFields.
- UnitOfWork(2).Commit now returns an integer which reflects the number of entities affected by the total set of actions performed. This could break overrides of Commit(2), as that method previously had a void return type.
- Better reporting of the number of entities saved when saving collections. This could affect unit tests, which may now fail because the number returned isn't the queue length anymore but the actual number of entities saved.
- If a nullable field is set to the value null / Nothing, the entity will still call the validator object set on the entity for further validation, instead of simply returning true and bypassing the validator as v2.0 and earlier versions did.
- In the DataAccessAdapterBase class, the following methods have been refactored so that only one overload is virtual instead of all of them, which could break code; please override the virtual version instead: FetchEntity, FetchEntityUsingUniqueConstraint, FetchNewEntity, FetchEntityCollection, DeleteEntity.
- In the DaoBase class, the following methods have been refactored and have lost their 'virtual' keyword, now have just one virtual overload, or have received a new parameter; please update your code to meet the new signatures or override a different overload: FetchExisting, ExecuteSingleRowRetrievalQuery, ExecuteMultiRowRetrievalQuery.
- In PrefetchPath(2), the virtual Add() method overload is now the one with the extra parameter for excluded fields. Please update your derived class to override the proper method.


Adapter specific: IEntityFields2.WriteXml and IEntityField2.WriteXml have been removed and are now internal methods, because the XML writing process has been optimized to use XmlWriter objects instead of an XmlDocument. It's recommended to write XML from entities and entity collections instead of directly from fields and field objects. The same applies to deserialization of XML to fields or field objects.

In dynamic and typed lists with fields from entities in a TargetPerEntityHierarchy hierarchy, the runtime libraries now automatically add the filters for the types the fields are defined in, so you'll get the proper results instead of potentially too many results. This could lead to different results at runtime if you didn't filter properly in your own code; it's likely you already have proper type filters in place, as otherwise the results would have mismatched what you requested anyway. Example: consider a dynamic/typed list with the field FamilyCar.Brand. This is an inherited field from CompanyCar, the supertype of FamilyCar. By specifying this field and not CompanyCar.Brand, it signals that the list shouldn't contain rows from types which aren't FamilyCar or a subtype of FamilyCar. With the automatic type filter addition, only FamilyCar rows or rows of subtypes of FamilyCar are returned.

Templates
SqlServer, SelfServicing: the catalog name is now also generated into the stored procedure name for every stored procedure call. This shouldn't have much impact, though you should be aware that if you want to call the method in a different catalog, you have to use catalog name overwriting.

Migrating generated code from any previous version to 2.0.0.0 runtime libraries
You have to re-generate the code and recompile the generated code, as well as recompile your own code which references the runtime libraries of LLBLGen Pro (ORM Support Classes and/or DQE). Be sure you reference the new runtime libraries. Your code will likely not compile at this point. Don't panic: we've created a list below with the important changes in the generated code / runtime libraries. The list is also important for code behavior changes at runtime, as we made a small set of changes which could affect runtime behavior, for example null reference exceptions because you access a property in your code which now returns null instead of a guaranteed value. This guide assumes you're on 1.0.2005.1. If you're on an older version, you might run into one or two extra breaking changes depending on the LLBLGen Pro version you're currently using; however, in almost all cases these are compile-time breaks, very rare and easy to fix. If you're using a lot of custom templates, you may run into more problems than those listed below; these can be solved by updating your templates, which you have to do anyway due to the new template configuration system.

Breaking changes v2.0
General
- DataDirect is no longer supported for Oracle. If you're using DataDirect in your project, you can't upgrade to v2.0. We'll provide a migration toolkit after v2.0 has been released to let you migrate your .lgp file to ODP.NET.
- .NET 2.0: if an entity field is nullable, by default it is generated as a Nullable(Of T) field. This means that if you read that field's property, you have to read the value into a Nullable(Of T) typed variable as well. You can control this through the preference/project property GenerateNullableFieldsAsNullableTypes, which controls every new field's 'Generate as Nullable type' setting. By default all the fields in your project will have that setting set to true if they're nullable and a value type. To disable nullable generation for some fields, you can do so per field in the entity editor. It's recommended to have all nullable fields generated as Nullable(Of T), and to only disable this feature if your current code misuses the default value feature of the entity fields; please see below how to detect that. A plug-in is included in the designer which allows you to set the 'generate as nullable type' flag of all selected entity fields to a given value (true/false), so you can migrate your code from 1.0.2005.1 to v2.0 more easily, as having a lot of nullable types can break a lot of code.
- An entity field's CurrentValue property is now set to null/Nothing by default, not to a default value anymore. Also, when a field is read from the database and the value appears to be NULL, the CurrentValue property is set to null/Nothing.
- Validator classes are no longer generated by default. You have to enable that task explicitly in the preset you're using. If you're NOT re-generating the validator classes (because, for example, you're not using them), you should remove the existing validator classes from the existing generated code.
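The effect of the Nullable(Of T) change on calling code can be sketched in plain C#; the variable stands in for a hypothetical generated entity property, it doesn't use the LLBLGen Pro API:

```csharp
using System;

public class NullableFieldDemo
{
    public static void Main()
    {
        // In v2.0, a nullable entity field is generated as Nullable<T>,
        // so a NULL database value surfaces as null instead of a default value.
        decimal? freight = null;   // e.g. what a nullable Freight field now returns

        // v1-style code no longer compiles:
        // decimal f = freight;    // error: cannot implicitly convert decimal? to decimal

        // Read into a nullable variable, or supply a fallback explicitly:
        Console.WriteLine(freight.HasValue); // False
        Console.WriteLine(freight ?? 0m);    // 0
    }
}
```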


- The PredicateFactory class is no longer generated by default. You have to enable that task explicitly in the preset you're using. If you're NOT re-generating the PredicateFactory class because you're not using its functionality in your current code, you should remove the file FactoryClasses\PredicateFactory.cs/vb from the existing generated code.
- The SortClauseFactory class is no longer generated by default. You have to enable that task explicitly in the preset you're using. If you're NOT re-generating the SortClauseFactory class because you're not using its functionality in your current code, you should remove the file FactoryClasses\SortClauseFactory.cs/vb from the existing generated code.
- The IEntityValidator interface has been removed; its functionality is merged with IValidator and implemented in ValidatorBase. All entities now have a single validator for both field and entity validation. Your existing IEntityValidator-implementing classes should be converted to classes which implement the new IValidator interface. An entity doesn't get a validator object by default: you either have to set it manually or, preferably, override CreateValidator and create the validator there.
- ASP.NET 1.x migration to ASP.NET 2.0: if you had entity collection classes dragged onto your webforms, they're no longer usable. You have to re-do the design-time databinding with LLBLGenProDataSource(2) controls. This is because ASP.NET drastically changed webform control usage as well as design-time databinding.
- .NET 2.0: CustomProperties property accessors and FieldCustomProperties accessors now work with Dictionary(Of String, String) (object custom properties) and Dictionary(Of String, Dictionary(Of String, String)) respectively, and no longer with Hashtables.
- .NET 2.0: Collection sorting through ApplySort() isn't supported anymore. This is an IBindingList method; it's been replaced by the EntityView(2) sort methods.
- .NET 2.0: The SoapFormatter isn't supported anymore (it can't deal with generics).
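The move from Hashtable to generic dictionaries removes the casts v1 code needed. A plain C# sketch of the new shapes (the property names used here are hypothetical, not from a real project):

```csharp
using System;
using System.Collections.Generic;

public class CustomPropertiesDemo
{
    public static void Main()
    {
        // v2.0 shape of object custom properties: Dictionary<string, string>
        var customProperties = new Dictionary<string, string>
        {
            { "Caption", "Customer" }
        };

        // v2.0 shape of field custom properties: field name -> property bag
        var fieldCustomProperties = new Dictionary<string, Dictionary<string, string>>
        {
            { "CompanyName", new Dictionary<string, string> { { "MaxLength", "40" } } }
        };

        // No casts needed, unlike the Hashtable-based v1 code.
        Console.WriteLine(customProperties["Caption"]);                       // Customer
        Console.WriteLine(fieldCustomProperties["CompanyName"]["MaxLength"]); // 40
    }
}
```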
- Collections now always have DoNotPerformAddIfPresent set to false by default. This can lead to duplicates in a collection at runtime, as Add() no longer checks by default whether an entity is already in the collection.
- .NET 2.0: Entities no longer expose per-field FieldNameChanged events. Instead they expose PropertyChanged, a general event exposed by the INotifyPropertyChanged interface, which is new in .NET 2.0. Code which should notify a control of a changed property should use the new OnPropertyChanged(propertyName) method.
- The list name of an entity collection bound to a grid has been changed to the following: the LLBLGenProEntityName of an instance created by the factory set for the collection + "Collection". So if the factory is set to CustomerEntityFactory, the name is "CustomerEntityCollection". In Adapter this is a change, as previously the list name was always "". This now allows you to define grid layouts and bind them to various sets of data at runtime.
- During databinding, when an edit cycle is in progress (row editing in a grid), the EntityContentsChanged event of an entity is now raised after EndEdit has been completed. This means that EntityContentsChanged is raised when a new row is selected. A containing collection won't be notified until this event is raised.
- The passing around of IValidator objects in SelfServicing fetch code has been removed: no IValidator object is passed around anymore. This can break code which overrides the ExecuteMultiRowRetrievalQuery routine in DaoBase or DataAccessAdapterBase.
- Entity collection classes no longer set EntityValidator instances and Validator instances on entities added to them. The two properties EntityValidatorToUse and ValidatorToUse have been made obsolete. The warnings resulting from this obsolescence have to be addressed to make the code work again; the properties have been kept so forms remain designable in VS.NET.
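The per-field FieldNameChanged events map onto the standard .NET 2.0 INotifyPropertyChanged pattern. A self-contained plain C# sketch of that pattern (OrderDemo and ShipCity are stand-ins, not generated classes; the entities implement this for you):

```csharp
using System;
using System.ComponentModel;

// Stand-in showing the INotifyPropertyChanged pattern the entities now use.
public class OrderDemo : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string _shipCity;
    public string ShipCity
    {
        get { return _shipCity; }
        set { _shipCity = value; OnPropertyChanged("ShipCity"); }
    }

    // One general notification method instead of a per-field event.
    protected void OnPropertyChanged(string propertyName)
    {
        if (PropertyChanged != null)
        {
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        }
    }

    public static void Main()
    {
        var order = new OrderDemo();

        // Subscribe once and inspect e.PropertyName, instead of
        // subscribing to a ShipCityChanged event:
        order.PropertyChanged += (sender, e) => Console.WriteLine("Changed: " + e.PropertyName);
        order.ShipCity = "Amsterdam"; // prints: Changed: ShipCity
    }
}
```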
- Entities now get their validator automatically when they're instantiated, if the proper method is overridden; they no longer need to get their validator object set by a collection they're contained in.
- A special method called OriginalValidate() has been added to existing Validator classes. In new code it's empty, but in code generated by previous versions of LLBLGen Pro it will contain the code originally stored inside the Validate() method. Call this special method either from an override of the Validate method from the base class or from a partial class. If the method doesn't contain code you added yourself, you can safely ignore it and leave it as is. If it does contain code you added previously, you can either copy the code over to the ValidationCode region or call the OriginalValidate() method from an override of ValidateFieldValue() to re-wire the original code to the validation framework. The OriginalValidate() method is added to preserve original code when a user upgrades from an LLBLGen Pro version 1.0.200x.y to v2.0. Your code may not compile even if you didn't have any validator code in the validation classes before; this occurs if you use inheritance in your project. If so, do a global replace of the line which breaks the compile to call the base class' ValidateFieldValue() instead of Validate().
- When an entity is deleted, a delete restriction filter was specified, and no rows were affected by the delete statement, an ORMConcurrencyException is now thrown. This wasn't the case in 1.0.200x.y code. A normal delete without a delete restriction won't throw this exception, as no restriction has been specified.
- .NET 2.0: Collection classes aren't derived from CollectionBase anymore, which means that the methods OnRemoveComplete and OnInsertComplete aren't available anymore for you to override. Use OnEntityRemoved and OnEntityAdded instead.
- When reading a value from an entity's field property (e.g. myCustomer.CompanyName) while the entity field hasn't been set to a value (as is the case for a new entity), an ORMInvalidFieldReadException is thrown if the developer has set the static flag EntityBase(2).MakeInvalidFieldReadsFatal to true (default: false). In v1 you could get away with this and use the default value returned, but this isn't allowed anymore because nullable fields now lead to different results, which would otherwise go unnoticed when you upgrade your project if the exception weren't thrown. Use the flag and the exception to track down code errors after migrating your v1 solution to v2. Example:

    OrderEntity order = new OrderEntity();
    DateTime date = order.OrderDate;  // no exception, default value returned

    // SelfServicing should use EntityBase instead
    EntityBase2.MakeInvalidFieldReadsFatal = true;
    OrderEntity order = new OrderEntity();
    DateTime date = order.OrderDate;  // ORMInvalidFieldReadException, as OrderDate hasn't been initialized

The exception will likely also be thrown in code like this (SelfServicing only):

    CustomerEntity customer = new CustomerEntity();
    OrderCollection orders = customer.Orders;  // ORMInvalidFieldReadException

This is because the PK of the customer entity is used to produce the query and that field hasn't been set. The following code works:

    CustomerEntity customer = new CustomerEntity(1);
    OrderCollection orders = customer.Orders;  // OK

The fetch works because the PK of customer has been set to a value, so no uninitialized field values are used.

Adapter specific
- A collection no longer has an IValidator object, which means it's no longer passed around in fetch logic, like in DataAccessAdapter.ExecuteMultiRowRetrievalQuery.
- Database-specific project: the file PersistenceInfoFactory.cs/vb is no longer used and should be removed; its functionality is replaced by the code in the file PersistenceInfoProvider.cs/vb. Be sure to remove the right file, otherwise your code won't compile.

SelfServicing specific
- All entity field persistence information is now shared among entity instances, in the form of IFieldPersistenceInfo objects. It's no longer possible to target multiple databases in SelfServicing without name overwriting defined in the .config file.
- .NET 2.0: All generated collection classes derive from EntityCollectionBase(Of T), even if the entity type is a subtype. This is because C# and VB.NET don't support covariance: List(Of String) isn't a subtype of List(Of Object). This means that you can't cast a subtype's collection to a supertype's collection. We tried to keep as much backwards compatibility as possible, so the SelfServicing collections aren't generic as in EntityCollection(Of T), but CustomerCollection or OrderCollection, also because the generated collections have some entity-type-specific code in them. If ManagerEntity is a subtype of EmployeeEntity, and thus there's an EmployeeCollection, ManagerCollection can't be a subtype of EmployeeCollection, because that would require EmployeeCollection(Of T), which would break a lot of existing code.
- EntityField.Name is no longer writable. Tricks which use this feature are no longer possible.
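The covariance restriction is a rule of the C# 2.0 / VB.NET 8 languages, not something specific to the generated collections. A minimal plain C# illustration:

```csharp
using System;
using System.Collections.Generic;

public class CovarianceDemo
{
    public static void Main()
    {
        List<string> strings = new List<string> { "a", "b" };

        // Generic classes aren't covariant in C# 2.0:
        // List<object> objects = strings; // compile error: no implicit conversion

        // For the same reason, a ManagerCollection can't be cast to an
        // EmployeeCollection, even though ManagerEntity derives from
        // EmployeeEntity. Copying element by element is the workaround:
        List<object> objects = new List<object>();
        foreach (string s in strings)
        {
            objects.Add(s);
        }
        Console.WriteLine(objects.Count); // 2
    }
}
```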


- The file FactoryClasses\PropertyDescriptorFactory.cs/vb is no longer used and should be removed.
- Two class scenario (now called 'TwoClasses'): there's no longer a Full/Safe mode; all derived entity class files are overwritten each time the code is generated. If you have code placed in these derived classes outside user-code regions, you should move it into the user-code regions generated into the derived entity classes. You can switch on the feature to not overwrite existing derived entity classes in the preset of choice: select the task which generates the derived entity classes and, in the parameter window at the bottom, specify 'true' for failWhenExistent. This isn't recommended, however; it's best to migrate the code to either user-code regions or partial classes (.NET 2.0).
- All persistence info now contains the catalog name as it is known in the project (if applicable, as with SqlServer). This is a change from v1.0.200x.y, where the catalog name was only generated into the persistence info when two or more catalogs were in the project. Because the catalog name is in the persistence info, you can't simply switch catalogs by setting a connection string; if you need to connect to a different catalog at runtime than the one you used to create the project, you need catalog name overwriting through the config file. Please see Generated code - Application configuration through .config files for more details.

Note: you can request the version of the runtime libraries you're currently using in your code as follows:
    // C#
    string version = SD.LLBLGen.Pro.ORMSupportClasses.RuntimeLibraryVersion.Version + "." +
        SD.LLBLGen.Pro.ORMSupportClasses.RuntimeLibraryVersion.Build;

    ' VB.NET
    Dim version As String = SD.LLBLGen.Pro.ORMSupportClasses.RuntimeLibraryVersion.Version & "." & _
        SD.LLBLGen.Pro.ORMSupportClasses.RuntimeLibraryVersion.Build

Besides the 2.6.0.0 assembly version, the runtime libraries also have a file version attribute. You can request that version attribute's value by right-clicking the .dll, selecting 'Properties' and, in the dialog that pops up, selecting the Version tab. The version format is: 2.6.08.mmdd.
LLBLGen Pro v2.6 documentation. ©2002-2008 Solutions Design


Getting started
Preface
Getting started with a complex system like an O/R mapper can seem like a huge task: where to begin? How do you do this or that with the system? LLBLGen Pro, although packed with features to make life easier, not harder, can still be a challenge at first: there is so much functionality that it's not always obvious what to do best. This Getting Started section guides you in your initial steps towards successful usage of LLBLGen Pro.

How to begin? Read the Concepts section!
Firing up the GUI, creating a project, adding some entities, generating some code: it's not that hard to do, even without reading any documentation. The questions start popping up when you want to use the code you just generated. It is essential, when you start your journey into O/R mapping the LLBLGen Pro way, that you open the Concepts section at the left and read it in full. Throughout the documentation, terms are used which are discussed in detail in the Concepts section. With a good understanding of what these concepts mean, the philosophy behind them, and how they relate to and interact with each other in the LLBLGen Pro system, you'll find it easier to get started with the system, its generated code and its functionality.

"I've read the concepts, now what!?"
Ok, so you bit the bullet and read all the concepts? Good. Let's get started! First you have to setup your database schema properly. That is: create foreign key constraints, primary key constraints, unique constraints where appropriate, and normalize the tables as much as possible. If you want to get to know the functionality by using SqlServer's Northwind, that's fine too, as long as you can create a project from a solid database.

Setting up the GUI
First, it is important to set some preferences in the GUI. Open the preferences dialog by selecting File -> Preferences. See the section about the preferences, LLBLGen Pro designer, preferences and Project properties, for more details about the various options you can set. The second thing you probably want to set up is singularization and pluralization in the designer. As not everyone uses the same language, we've refactored the code which performs these name conversions out into two plug-ins. If you're using English names for the database elements of your database schema, it's wise to set up singularization and pluralization using the shipped plug-ins, as these make life easier for you when it comes to correcting names for entities and constructing names for fields mapped on relations and so on. Setting up singularization and pluralization is easy, as described in the section LLBLGen Pro designer, Working with plug-ins - Designer events and plug-ins. After you're done, it's time to create your first project.

Creating a new project
Ok, let's create a project! To create a project, click on File -> New project (or press Ctrl-N). This will open the create new project dialog, as discussed in the LLBLGen Pro designer, creating a project section. See that section for details about creating a project. Once the project is created you have a fairly empty project explorer and a catalog with one or more schemas in the catalog explorer. The next step is to verify the project properties are correct. Especially the MakeElementNamePascalCasing and the various strip patterns under Name construction can be of great use, as well as the FieldMappedOn* patterns. The project properties are inherited from the preferences set in the designer, and are stored in the project file so other developers in your team using the same project file will use the same project properties.


Adding entities
After you've created a new project (or loaded a clean project), it's time to add some entities! Click with your right mouse button on the 'Entities' node in the project explorer and select Add new entities mapped on tables from catalog(s)... option from the context menu (or select the Add new entities mapped on tables from catalog(s)... option from the Project menu). LLBLGen Pro will now determine which tables don't have a counterpart in the form of an entity definition in your project and will enlist all entities not yet added to your project in a dialog box, as shown in the section LLBLGen Pro designer, Adding entities. You can toggle the checkboxes of multiple entities by selecting one or more rows in the grid and by clicking Toggle checkboxes of selected rows. All entities with their checkbox checked are added to the project when you click 'Ok'.

Generating code
"That's it?" Well, to get started, yes. As LLBLGen Pro determines all relations automatically, your project is already set up correctly with the relations present. Of course you're free to open entity editors of particular entities, rename fields mapped on table fields/relations, add custom properties, hide relations or create new relations if you haven't specified any foreign key constraints in the database. It's wise to examine the fields mapped on relations, as LLBLGen Pro determines all possible relations for you and you probably want to give them proper names. You can do that in the project explorer by right-clicking a field, or use the entity editor for that particular entity. After you're satisfied with your project, press F7 or select Project -> Generate from the menu. This opens the generator configuration window, which is discussed in detail in the LLBLGen Pro designer, generating code section. To get started, the SelfServicing template group is recommended, so select 'SelfServicing' on the first tab for Template group. Also select the target platform you're going to use in your code, for example .NET 2.0. Then go to the third tab and select the preset you want to use, for example the TwoClasses preset. Select the language of choice (C# or VB.NET), specify the output directory and root namespace, and click Start generator. The output directory is the root of the generated files. It's wise to use a dedicated directory for the generated code.

"Cool, a lot of classes. And now I can query the database?"
Well, yes! To keep things simple, create a new VS.NET solution (or a multi-project project in the IDE you're using) and add the generated code project to that solution. The references to the runtime libraries should be OK. If you're using Oracle, perhaps the ODP.NET assembly Oracle.DataAccess.dll isn't referenced correctly and you should fix the reference. Either way, it's best to check whether the VS.NET version you're using indeed loaded the correct assemblies, by clicking the LLBLGen Pro runtime libraries in the references listing and checking from which path they're loaded. Add a new project to the solution, for example a console application. This is your .exe application which will call into the generated code to work with the data. Now, drag the app.config file in the generated code project to the .exe project, which will then be able to use the connection string generated into the app.config file. Add the ORM Support Classes runtime library to the references of your .exe project: for VS.NET 2002, use the .NET 1.0 version; for VS.NET 2003, use the .NET 1.1 version; for VS.NET 2005 and VS.NET 2008, use the .NET 2.0 version. For CF.NET 1.0, use the CF10 version; for CF.NET 2.0, use the CF20 version; and for CF.NET 3.5, use the CF35 version. Be sure you don't accidentally reference the CF.NET version of the runtime library. For Linq usage, be sure the reference to the LinqSupportClasses is present in your project, as described in Compiling your code. Of course, also reference the generated code project in your .exe project. Time to build the solution to see if everything is OK! This will build the generated code and the (still empty) .exe project. If your LLBLGen Pro project is very large (say, over 500 entities), compiling from VS.NET can be cumbersome (slow). In that case it's recommended to use a build tool like NAnt, MSBuild or command line tools to build the generated code in a separate solution.
If the generated code builds OK (in rare cases it can fail due to entity names conflicting with known classes; in that case, change the entity name in the GUI and regenerate the code), you're ready to get some data manipulation going! It's recommended that you start with the How Do I? section in the Tutorials documentation: Tutorials and Examples, how do I ... ?. That section contains a lot of easy and advanced topics you will run into when using the generated code. Using the How Do I? section you can start without having to dig through the documentation about using entity classes and the like. Always keep in mind that the generated code is divided into namespaces, so you probably need to specify using/Imports statements at the top of your file. It's also wise to do the tutorials listed in the Tutorials section. After you've worked through the How Do I? and tutorials sections, you can start learning about the usage of the entity classes in code, starting with LLBLGen Pro, using the generated code, and then selecting the section for the template group you chose when you specified the generator configuration. The example applications available on the LLBLGen Pro website are also a good start.

Linq to LLBLGen Pro usage
If you want to use Linq to LLBLGen Pro in your project to query entities, instead of the native LLBLGen Pro query API with predicates and the like, you could start right away with the Linq to LLBLGen Pro section. However, you're advised to also take a look at the Filtering and Sorting section of the template group you've chosen (Adapter, SelfServicing). This gives you a better picture of what happens behind the scenes with a Linq query, and also tells you what to do in cases where a Linq query doesn't cut it (e.g. in edge cases with complex joins) and you have to fall back on the LLBLGen Pro query API.


Compact Framework / Sql CE support
Preface
Since v1.0.2005.1, LLBLGen Pro supports the Microsoft .NET Compact Framework (CF.NET) and SqlServer CE on PocketPC and Windows CE. This section discusses the limitations of the support LLBLGen Pro currently offers for CF.NET. LLBLGen Pro supports the CF.NET 1.0 and CF.NET 2.0 frameworks, as well as SqlServer CE 2.0 or higher, and also SqlServer CE Desktop v3.1 on .NET 2.0 or higher. CF.NET 1.0 development requires Visual Studio .NET 2003, CF.NET 2.0 development requires Visual Studio 2005, and CF.NET 3.5 development requires Visual Studio 2008. This section is about SqlServer CE support for the Compact Framework; please see Database specific features for details about SqlServer CE Desktop.

Requirements

- LLBLGen Pro v2.6
- For CF.NET 1.0: OpenNETCF installed (v1.4: http://www.opennetcf.org)
- For CF.NET 2.0: CF.NET 2.0 SP2 as well

Supported functionality

- CF.NET v1.0, v2.0, v3.5
- Pocket PC / Windows Mobile
- C# and VB.NET
- SelfServicing and Adapter. Adapter is recommended, and a developer should set KeepConnectionOpen to true.
- SqlServer CE v3.0 or higher. You can use SqlServer CE 2.0, but it's highly recommended you use the latest SqlServer CE database version.
- SqlServer projects only, when it comes to data access. You are able to generate DatabaseGeneric code in Adapter projects for other databases.

Compiling your code / using the code
VS.NET projects are generated automatically. Please select the required CF.NET version in the platform combo box in the generator configuration dialog. The runtime libraries are:

For CF.NET 1.0:
- SD.LLBLGen.Pro.ORMSupportClasses.CF11.dll
- SD.LLBLGen.Pro.DQE.SqlServerCE.CF11.dll

For CF.NET 2.0:
- SD.LLBLGen.Pro.ORMSupportClasses.CF20.dll
- SD.LLBLGen.Pro.DQE.SqlServerCE.CF20.dll

For CF.NET 3.5:
- SD.LLBLGen.Pro.ORMSupportClasses.CF35.dll
- SD.LLBLGen.Pro.DQE.SqlServerCE.CF35.dll

You have to re-generate your code if you have an existing project; you can't recompile your existing code for CF.NET automatically. To start with SqlServer CE, first create the project on a normal SqlServer 7/2000/2005 database, then generate code using that project, selecting CF.NET 1.0, CF.NET 2.0 or CF.NET 3.5 in the platform combo box.


LLBLGen Pro functionality not available
- Remoting, serialization to/from SOAP/Binary formatter
- COM+ / Enterprise Services functionality
- Design-time databinding
- Stored procedure calls
- Delegate support for stored procedure calls in a unit of work
- DELETE FROM ... FROM and UPDATE ... FROM ...
- Paging
- The generated queries can target only one catalog. Catalogs can have just one schema, so objects in the SqlServerCE database are targeted as if there is just one catalog and no other schema. Schema and catalog name overwriting is not supported: you can specify it, but it's not used.
- Saving a transaction to a savepoint is not supported.
- Named transactions are not supported; the name is ignored.
- aggregatefunction(DISTINCT ..) versions are not supported, just the non-distinct versions
- STDEV and VAR aggregate functions
- ARITHABORT isn't supported, as SqlServerCE doesn't support indexed views.
- SqlServer type SmallMoney is converted to Money in parameters.
- SqlServer type Char is converted to NChar in parameters.
- SqlServer type VarChar is converted to NVarChar in parameters.
- SqlServer type Text is converted to NText in parameters.
- App.Config reading. Users should supply the connection string to either the Adapter constructor or to DbUtils.ActualConnectionString.
- Type converters
- Linq

Notes for this release
- For CF.NET 1.0, be sure to reference OpenCFNet 1.4 or higher to be able to compile the generated code.
- Conditional areas are added to exclude code from CF builds. CF builds use a compiler directive, CF, which will exclude some areas or include some areas in the code for CF builds only.
- Adapter is recommended, as Adapter offers the ability to keep the connection open, and SqlServer CE doesn't allow multiple connections. SelfServicing will open/close the connection multiple times in a hierarchical fetch and is therefore less efficient.
- Multi-threaded applications which execute queries in parallel are only supported with a shared adapter object, which has to be used with care (i.e. access to it has to be surrounded with a lock statement).
- As there is just one catalog supported per project, the catalog to connect to is determined by the connection string. This means that the catalog specified in the connection string is also the catalog the code will access.
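The recommended pattern from the notes above, a single Adapter instance with its connection kept open and all access serialized through a lock, can be sketched as follows. This is a minimal sketch, not the definitive implementation: the DataAccessAdapter, KeepConnectionOpen and FetchEntity members are part of the Adapter template output and runtime library, while the CustomerEntity name and the helper method are assumptions for illustration.

```csharp
// Hedged sketch: shared adapter for SqlServer CE. KeepConnectionOpen avoids
// repeated open/close cycles; the lock serializes parallel query execution.
private static readonly object _adapterLock = new object();
private static DataAccessAdapter _adapter;

public static CustomerEntity FetchCustomer(string customerID)
{
    lock(_adapterLock)   // all access to the shared adapter is serialized
    {
        if(_adapter == null)
        {
            _adapter = new DataAccessAdapter();
            _adapter.KeepConnectionOpen = true;  // single open connection
        }
        CustomerEntity customer = new CustomerEntity(customerID);
        _adapter.FetchEntity(customer);
        return customer;
    }
}
```

The lock is essential here: SqlServer CE doesn't allow multiple connections, so a shared adapter is the only way to serve multiple threads.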

LLBLGen Pro v2.6 documentation. ©2002-2008 Solutions Design


Concepts
Preface
LLBLGen Pro is a system that helps you build solid n-tier applications quickly and easily. To be able to do that, you should familiarize yourself with some concepts forming the foundation of the system and of the source code it produces. In the LLBLGen Pro system you first create definitions for Entities, Typed Lists, Typed Views and, if you want to include existing stored procedures, Stored Procedure calls. When you're done designing, you use these definitions to generate code. It is thus fundamental that you understand what these definitions stand for and how they are used in the process. This section, the concepts section, guides you through a set of concepts and their meanings. This way you will understand why certain things in the LLBLGen Pro system are designed the way they are and how you can utilize their full power.

The concepts
Entities, typed lists and typed views. These elements are the building blocks of the code you are designing with the LLBLGen Pro system. The designer of the LLBLGen Pro system lets you design these elements before they are used to generate code.

O/R Mapping. O/R Mapping is short for Object-Relational Mapping, and is the general term for the concept of creating mappings between tables or views and their fields and Object Oriented (OO) classes and their fields, to be able to represent a table row / view row in an OO program via a class. LLBLGen Pro uses the O/R Mapping technique in the generated code. O/R Mapper frameworks are also called 'object persistence frameworks'.

Entity inheritance and relational models. Using supertype/subtype constructions in abstract relational models offers you the ability to use inheritance in LLBLGen Pro as well. This section describes the relation between inheritance constructs of entity hierarchies, how they're typically constructed in schemas and how inheritance in entity hierarchies differs from plain OO code inheritance.

Stateless persistence. 'Stateful' and 'stateless' are terms which have a lot of different, often contradicting, definitions. To clarify what stateless and stateless persistence mean in the LLBLGen Pro system, this section describes these terms and why the system utilizes this variant and not another one.

Task based code generation. LLBLGen Pro uses a template consuming, task based code generator framework. This system is very powerful and flexible, but because of that it can also lead to confusion and questions about where to alter what to get the results you want. This section explains the system and its elements and how you can utilize it in full to extend it to meet your needs.

Templates and Template Groups. The core code emitter task performer (see Task based code generation) shipped with LLBLGen Pro uses templates to generate the code for you. These templates are semantically grouped in Template Groups, which consist of a variety of templates for all the classes that will form the framework resulting from a code generation action. This section explains how they work and what their place is in the system.

Database drivers. LLBLGen Pro uses database drivers to connect to different types of databases in order to retrieve the schemas which will be the target of the code to generate. This section explains the role of the database drivers in the LLBLGen Pro system.

Dynamic SQL. LLBLGen Pro doesn't use stored procedures for data-access. Instead, it uses a Dynamic Query Engine (DQE) which generates SQL on the fly, parameterized, adapting dynamically to the current situation. This section explains what the advantages of Dynamic SQL are and how it is embedded in the LLBLGen Pro system.

Type Converters. This section briefly describes the Type Converter technology built into LLBLGen Pro. With this technology you can specify, for your entity/typed view fields, .NET types other than the ones compatible with the table/view fields the entity/typed view fields are mapped on. An example of this is mapping a System.Boolean entity field onto a numeric database field.

Dependency Injection and Inversion of Control. This section discusses the Inversion of Control (IoC) design of the LLBLGen Pro framework and briefly introduces the Dependency Injection mechanism built into LLBLGen Pro.


Concepts - Entities, Typed lists and Typed views
Preface
Entities, typed lists and typed views are the elements forming the building blocks of the functionality you design with the LLBLGen Pro system. The designer of the LLBLGen Pro system lets you design these elements before they are used to generate code. Below you'll find a short description per element of what the element stands for, how it is used in the LLBLGen Pro designer and in which context you should place the element when using the generated code. The text below can be a little theoretical at first, but don't let that stop you reading it. Each subsection has, when applicable, a summary which explains in short what you need to know, which is enough to get started with the designer later on. When available, documents on the internet are linked to help you further understand the background of certain theories used. If some theory is not completely clear to you, you are advised to download the documents linked at the bottom of this page or visit the sites mentioned. It is however not necessary to become an expert in E/R modelling, NIAM/ORM (Object Role Modelling) or another modelling technique to work with LLBLGen Pro. On the contrary: when you understand the basic concepts behind these modelling techniques and theory, you are fully equipped to start designing your first LLBLGen Pro project!

Entities
LLBLGen Pro's core element to work with is the Entity. The term 'Entity' has a lot of, often contradicting, definitions: Martin Fowler defined the entity differently than the people who defined it decades ago: Peter Chen and Edward Yourdon. LLBLGen Pro uses the 'ancient' definition of an 'entity', defined by Dr. Peter Chen in relation to his Entity Relationship Model [1]:

"Entity and Entity set. Let e denote an entity which exists in our minds. Entities are classified into different entity sets such as EMPLOYEE, PROJECT, and DEPARTMENT. There is a predicate associated with each entity set to test whether an entity belongs to it. For example, if we know an entity is in the entity set EMPLOYEE, then we know that it has the properties common to the other entities in the entity set EMPLOYEE. Among these properties is the aforementioned test predicate."

Typically, entities are realized in a database schema as a table or view: the entity 'Customer' is realized as the 'Customer' table, 'Order' as the 'Order' table etc. There are exceptions to this rule:

- An m:n relation is often realized using an 'intermediate' table.
- When an entity supertype/subtype hierarchy is defined, an entity in a relational model can be physically realized with more than one table/view. This is further discussed in the section Entity inheritance and relational models.

So unless inheritance is used to create a hierarchy of entity types, an entity has a 1:1 relation with its physical counterpart in the database model, be it a table or a view. In normal terms, this is also known as 'the entity is mapped onto a table or view', and an entity is always mapped onto at least one table/view. The concept of an entity is therefore a definition for a relational model element and has been used by database designers, DBAs and software engineers all over the world for many years. It is then logical to adopt that concept of an entity for the Entity element used in the LLBLGen Pro system.
LLBLGen Pro uses a table or view definition as the base definition for an entity, also when supertypes/subtypes are used in an inheritance scenario: an entity is always mapped onto at least one table or view. Also, tables which are used for the construction of m:n relations (the aforementioned 'intermediate tables') are seen as separate entities. The reason for this is that the intermediate table's definition is often 'objectified'[2], which means that the relation formed by the table fields (attributes) used to construct the m:n relation is seen as a separate entity which can hold a new set of fields (attributes) of its own.


An example of this is when an m:n relation has been defined between the entities Department and Employee via the intermediate table DepartmentEmployees. This DepartmentEmployees table has two fields: DepartmentID and EmployeeID. Now we can use this m:n relation to add to the model specific fields which relate to the m:n relation, for example startDate: the date an employee started to work for a department. This field will be added to the DepartmentEmployees table. When that is the case, the relationship between Department and Employee has been objectified into a new entity: DepartmentEmployees. To give better insight into this, LLBLGen Pro always sees every table as an entity, so you will see every m:n relation via an intermediate table as a possible objectified relation. Also, it's often the case that m:n relationships are defined between two entities using a third entity with its own set of fields (attributes). An example of this is the m:n relation between Order and Product via OrderRow.

Entity Fields (Attributes)
The attributes, or fields, of an entity were already mentioned above. Examples of entity attributes are CustomerID, CompanyName and EmployeeID. These attributes together form the entity definition's data fields and also the table's column set. In LLBLGen Pro you'll recognize them as fields mapped on database fields. So when, for example, a table called 'Customer' is defined as the map target of an Entity definition in LLBLGen Pro (which is a 1:1 relationship; they represent the same semantic element), the fields of that table are available as entity fields in LLBLGen Pro, and each entity field has a 1:1 relation with its related table or view field.

Entity relations
Entities can have relations with other entities, be it a 1:1, 1:n (one to many), m:1 (many to one) or m:n (many to many) relation. These relations are defined automatically by LLBLGen Pro and can be used to traverse from one entity to another. To make that possible, LLBLGen Pro defines fields mapped on relations. An example of this is the field Orders in a Customer entity, which represents an entity set of Order entities related to the Customer.

Summary
- Entities are a common concept defined by Dr. Peter Chen in 1976, and are used in both the database world and the software development world.
- Entity definitions are physically represented by table or view definitions; an entity itself is represented by a table or view row (record) and an entity set is represented by a set of table or view rows (record set).
- Entities in LLBLGen Pro are 1:1 representations of tables or views found in the catalog(s)/schema(s) of the project loaded, and target one table or view only.
- Table or view fields are represented in LLBLGen Pro entities as entity fields mapped on database fields. These entity fields have a 1:1 relation with their table/view field equivalents.
- Relations between entities are automatically defined by LLBLGen Pro and are used to define extra fields in the entities to traverse from one entity to another, using the information of the starting entity. For example, Customer.Orders contains all Order entities related to the customer holding the Orders field, which is mapped on the 1:n relation between Customer and Order.
- Exactly what entities and their relations look like in generated code depends on the selected template group and preset. See for more details: Generating code, Task based code generation and Templates and Template groups.
- The designer works with the definition of an entity, and you as a user do too. You shouldn't think in terms of classes and objects yet; an entity definition can result in a lot of classes (entity base class, entity class, entity modification ASP.NET form class etc.), however they're all based on the same entity definition.
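In generated code, the fields mapped on relations described above become plain properties. As a minimal sketch (SelfServicing flavour; the CustomerEntity constructor, Orders property and Save()/Delete() methods appear elsewhere in this documentation, while the exact Order field names are assumptions):

```csharp
// Hedged sketch: traversing a 1:n relation in SelfServicing generated code.
// The CustomerEntity constructor fetches the entity by primary key; reading
// the Orders property fetches the related Order entities on first access.
CustomerEntity customer = new CustomerEntity("SOLDES");
foreach(OrderEntity order in customer.Orders)
{
    // OrderId is an assumed field name for illustration.
    Console.WriteLine("Order: {0}", order.OrderId);
}
```

No query code is written by hand here: the relation defined in the designer is all the generated code needs to produce the SELECT statement.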

Typed lists
To make life easier when working with entities, LLBLGen Pro defines the concept of a Typed List. A Typed List is a list of entity fields from a set of entities which have 1:1, 1:n or m:1 relations with each other. This way you can create custom, read-only views in LLBLGen Pro on the whole set of entity definitions in your project, which can span more than one entity and do not have to contain all fields of all participating entities. Typed Lists are a powerful, easy way to define read-only lists building on existing entity definitions and their relations. LLBLGen Pro includes a powerful editor for defining, within seconds, typed lists which are guaranteed to be based on an existing set of relations between the entities participating in the Typed List. All entity fields in a Typed List are fields mapped on database fields; fields mapped on relations can't be used as Typed List fields. All Typed Lists are generated as typed DataTable classes.

Summary
A Typed List is a collection of entity fields from a set of entities which have one or more 1:1, 1:n or m:1 relations with each other. A Typed List is a read-only view on the entities in your database.
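Because a Typed List is generated as a typed DataTable class, fetching and reading one is straightforward. A hedged sketch, assuming a SelfServicing typed list built from Customer and Order fields; the class name, row type and column names are assumptions, and the Fill() method follows the usual SelfServicing typed list pattern:

```csharp
// Hedged sketch: fetching a generated Typed List (SelfServicing).
// OrderCustomerTypedList, OrderCustomerRow and the columns are assumed names.
OrderCustomerTypedList orderCustomer = new OrderCustomerTypedList();
orderCustomer.Fill();   // executes the generated SELECT and fills the DataTable
for(int i = 0; i < orderCustomer.Rows.Count; i++)
{
    OrderCustomerRow row = orderCustomer[i];   // typed row access (assumed indexer)
    Console.WriteLine("{0}: {1}", row.CompanyName, row.OrderId);
}
```

Since the result is a DataTable, it can be bound to grids or other controls like any other DataTable, but it stays read-only: changes aren't persisted back.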


Typed Views
LLBLGen Pro can create Typed View definitions which map in a 1:1 way on existing views in the catalog(s)/schema(s). Typed Views can come in handy when you've defined special lists using database view technology and you want to use them in a typed, read-only manner in your .NET project. LLBLGen Pro offers you the ability to define a Typed View equivalent of a database view, which can then be used in your .NET code in a read-only way. A Typed View is, like the Typed List, generated as a typed DataTable class, so you can use it in your .NET code in a typed way, which increases code stability and development speed. An entity mapped onto a view can also be used to retrieve view data, as well as a Typed View created from the same database view. If you're using the view data in a read-only fashion, it is recommended to use the Typed View object, as it contains less overhead for bulk fetches and can be somewhat faster in bulk-fetch scenarios. Views can also be seen as entities (as discussed above), and therefore it can be useful in some scenarios to map an entity onto a view as well. LLBLGen Pro lets you map a Typed View and an entity (or multiple entities) onto the same existing database view at the same time. If you want to use a view in joins or filters, it's key that you map an entity onto the view, so you can define custom relations inside the LLBLGen Pro designer.
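A Typed View is consumed the same way as a Typed List, since both are generated as typed DataTable classes. A hedged sketch, assuming a database view named Invoices mapped as a SelfServicing Typed View; the class, row type and column names are assumptions:

```csharp
// Hedged sketch: fetching a generated Typed View (SelfServicing).
// InvoicesTypedView, InvoicesRow and the columns are assumed names.
InvoicesTypedView invoices = new InvoicesTypedView();
invoices.Fill();   // read-only bulk fetch of the view's rows
for(int i = 0; i < invoices.Rows.Count; i++)
{
    InvoicesRow row = invoices[i];   // typed row access (assumed indexer)
    Console.WriteLine("{0}: {1}", row.CustomerId, row.ExtendedPrice);
}
```

This is the low-overhead, read-only path; when you need the view's rows in joins, filters or updates, map an entity onto the view instead, as described above.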

Stored procedure calls
To re-use existing code, defined in stored procedures in the target catalog(s)/schema(s), in your .NET code, LLBLGen Pro can create Stored procedure call definitions. These definitions define a call to a given stored procedure, be it a retrieval procedure (i.e. a procedure which returns a resultset) or an action procedure (a procedure which does not return a resultset), which will be generated as a method in the .NET code. This way you can keep your investment in stored procedure code, and at the same time utilize that code in a typed, easy way.
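In SelfServicing generated code, such stored procedure call definitions typically surface as static methods on generated classes named RetrievalProcedures and ActionProcedures. A hedged sketch; the procedure names and parameters below are assumptions for illustration, so verify the exact method names against your own generated code:

```csharp
// Hedged sketch: calling mapped stored procedures (SelfServicing).
// A retrieval procedure returns its resultset as a DataTable; an action
// procedure returns the number of rows affected.
DataTable history = RetrievalProcedures.CustOrderHist("SOLDES");
int rowsAffected = ActionProcedures.CleanupOrders(2008);
```

The benefit over raw ADO.NET calls is that parameter names and types are fixed at generation time, so a changed procedure signature shows up as a compile error instead of a runtime failure.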
Notes
[1] Dr. Peter Chen, "The entity-relationship model - toward a unified view of data", ACM Transactions on Database Systems, vol. 1, no. 1, March 1976, pages 9-36.
[2] Prof. T.A. Halpin, "Object Role Modelling: an overview", page 10 and further.


Concepts - O/R Mapping
Preface
O/R Mapping, or Object Relational Mapping, is the general term for the concept of creating mappings between tables / views and their fields, and OO classes and their fields to be able to represent a table or view row in an OO program via a class. O/R Mapping is done using entities and their attributes (which are physically available through table or view definitions) and by creating a class for each entity, mapping each field in that entity class onto a field in a table or view. The management logic necessary to read an entity's data from the database into an instance of its entity class and back, together with the entity definitions, is called an O/R Mapper framework. Via this framework a developer can manipulate data in the database, using classes and their methods.

LLBLGen Pro's O/R Mapping
O/R Mapping is not new; in the Java world, for example, it is a common technique for working with data in a database. For developers using Microsoft platforms, and .NET in particular, O/R Mapping is rather new and often misunderstood. Most Microsoft oriented developers think in tiers, tables and raw SQL statements and look for code generators for these tiers, completely neglecting the term O/R Mapping and the tools utilizing this technique. As with all techniques used by developers on a variety of platforms, O/R Mapping has a list of definitions which are not always the same. To be clear about how O/R Mapping is utilized in the LLBLGen Pro system, this section defines O/R Mapping for LLBLGen Pro. It by no means claims to be the 'correct' definition of an O/R Mapping framework, but it is a usable definition for LLBLGen Pro users.

Mapping with entities
Because LLBLGen Pro uses the relational model oriented definition of an entity, mapping of classes on tables / views is simply done by creating an entity definition for every table / view definition selected by the user, and an entity field definition for every table / view field in these selected tables / views. This concept is extended by offering the ability to specify that an entity is a subtype and/or a supertype, which results in an entity being mapped onto more than one table / view (its own table / view and all the tables / views its supertype (and that entity's supertype etc.) are mapped on). For more information about this concept which is called 'Inheritance', please see Entity inheritance and relational models. LLBLGen Pro allows you to map more than one entity definition onto the same table / view, and / or partly map an entity definition onto its table / view. Entity classes generated using these entity definitions contain all logic necessary to read an entity's data from the database (persistent storage) or send back modified data to the persistent storage. You can see the instance of an entity class as the in-memory representation of a table / view row's data, and the methods exposed by the instance of the class as the tools to work with these data. Derived entities, or subtypes, are generated as derived classes and simply inherit the supertype's fields, logic and methods, overriding properties/methods where appropriate. A derived entity type therefore contains the supertype's fields together with its own fields. See for more details Entity inheritance and relational models. LLBLGen Pro doesn't use an in-memory object cache, which means that the generated code and the runtime library targeted by the generated code, are designed to work with entities directly in the persistent storage (database). In-memory equivalents of data in the persistent storage (i.e. 
the entity objects you instantiate when you instantiate an entity class and fill it with its data from the persistent storage) will be kept in sync, if possible. However, code using the entity classes shouldn't rely on that, because .NET doesn't have a way to know when a database row has changed and thus when an entity object in memory should be re-synced with the database. LLBLGen Pro defines a context which can be used to create unique objects per entity row in the database. Often it is not necessary to create unique objects per entity's data; however, in the situations where you need it, the context class is a way to make sure an entity is loaded into just one entity object for that particular context. See Using the context for more detailed information about how to use this in your code.
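The uniquing behaviour of the context class can be sketched as follows. This is a hedged sketch of the SelfServicing flavour; the Add/Get members shown follow the context's documented purpose, but the exact signatures should be verified against the Using the context section:

```csharp
// Hedged sketch: using a Context to get one unique entity object per
// entity row, within that context.
Context myContext = new Context();

CustomerEntity customerA = new CustomerEntity("SOLDES");
myContext.Add(customerA);   // customerA is now tracked by this context

// Fetching the same row through the same context yields the same object.
CustomerEntity customerB =
    (CustomerEntity)myContext.Get(new CustomerEntityFactory(), "SOLDES");
bool sameInstance = Object.ReferenceEquals(customerA, customerB);
// sameInstance is true, per the context's uniquing guarantee
```

Without a context, each fetch of "SOLDES" produces a separate in-memory object; the context only guarantees uniqueness within its own scope, not application-wide.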


The entity classes generated give you the ability to work with data in the persistent storage in a typed, friendly way; however they don't contain logic often seen in Java O/R mappers, like an application-wide object cache. The reason for this is that .NET doesn't have a system like Enterprise Java Beans (EJB), which does the persisting for you and keeps track of any changes, even in a distributed application spanning more than one machine. This is not a limitation, however: working with entities as they are defined by LLBLGen Pro is a matter of data manipulation in the persistent storage, with close ties to that persistent storage. Multiple users, using multiple applications or threads targeting the same database, can work with the same data without problems, because the data manipulations are targeted directly at the persistent storage. Read more about this topic in Stateless persistence.

An entity's lifecycle
To get more familiar with the terms, the way LLBLGen Pro handles things and the way the terms relate to each other, the following list illustrates when what is done and why. As an example, the entity definition 'Customer' is used, which targets the table 'Customer'. A new customer is added, with the primary key 'SOLDES', which is the CustomerID. For simplicity, SelfServicing is used to illustrate the lifecycle; for Adapter it is similar, the developer just uses different methods/classes to achieve the same goal(s).

Entity class instantiation. This action is the start of a new entity's life. You create a new instance of a given entity's class, in this case the CustomerEntity class, as we're discussing the lifecycle of a customer entity. The object you have after instantiation is an empty object, specialized for Customer entities. This can be as simple as this statement:

CustomerEntity customer = new CustomerEntity(); // C#
Dim customer As New CustomerEntity() ' VB.NET

The object 'customer' is now the instantiation of a new CustomerEntity class, but it doesn't have any values yet, and it is only known in memory, not in the persistent storage. Other users or threads in the application can't see the object.

Filling the entity object with data. This action fills the entity object with its data. In this case, you specify values for the 'customer' object's properties, for example its CustomerID (if that isn't generated by the database) and CompanyName. When the properties are filled, the object is no longer an empty bucket, but the holder of the data of a new entity: the entity which represents the customer whose data is stored inside the object.

Persisting the entity. Persisting means that the entity's data, which is held in memory by an instance of an entity class, is written back to the persistent storage. When the entity is new, like our customer in this example, persisting means that a new row will be added to the table (or tables, through the view the entity is mapped on, if the RDBMS supports updatable views) whose definition represents the entity definition (in other words the target table / view of the entity definition, in this case the Customer entity definition, which targets the Customers table). When an entity is not new, its row in the persistent storage is updated with the changed information held by the object in memory. Persisting entities in the LLBLGen Pro generated code simply requires calling the entity object's Save() method. The persistence logic will figure out whether it has to insert the entity data as a new row or update its existing row in the persistent storage (database). Once an entity is persisted to the persistent storage, it is available in the database, and thus can and will be seen by other threads and users targeting the same database. (Transactional, database specific functionality can keep an entity hidden until a transaction is finished; for this example, we assume no transaction is issued, just a single entity is persisted.)

Modify an entity field. After the entity is persisted, other users can read the entity's data via other threads, by instantiating the entity back into memory, their memory. This can for example be done on another machine. To read back the entity from the database into memory, a developer can use the following statement:

CustomerEntity customer = new CustomerEntity("SOLDES"); // C#
Dim customer As New CustomerEntity("SOLDES") ' VB.NET


When these statements are completed, the Customer entity represented by the CustomerID (which is the primary key) with the value "SOLDES" is read into memory, into a new instance of the CustomerEntity class, and that instance is called 'customer'. The developer can then change one or more of this entity's values by altering the values of 'customer's properties. Also, the developer can traverse relations the Customer entity has with other entities, for example ask for a collection of all the customer's orders by simply referring to the 'Orders' property this entity object might have, because the entity definition has a relation with the Order entity. The developer changes the value of CompanyName into "Solutions Design" and persists the entity back to the persistent storage by using these statements:

customer.CompanyName = "Solutions Design"; // C#
customer.Save();

customer.CompanyName = "Solutions Design" ' VB.NET
customer.Save()

After the Save() method has been called, the entity's new data is available in the database and can be seen and used by other users and threads.

Deleting an entity. Everything ends, and so does the relationship with this customer. The developer wants to remove the entity completely from the system. This is simply done by calling the Delete() method exposed by the customer object. When the Delete() method is called and the deletion was successful, the entity itself is removed from the persistent storage, and the entity's data holding object, the customer object whose Delete() method was called, is cleared. Now the entity is not available anymore to others. Also, using the in-memory object after a Delete() call will result in an exception. The only thing left is letting the 'customer' object go out of scope so the .NET garbage collector can clean up the bits that once formed the object holding the SOLDES entity.


Concepts - Entity inheritance and relational models
Preface
This section describes the phenomenon called 'entity inheritance' and its connection with relational models and the physical data model. It presents two typical entity scenarios and offers insight into how these scenarios are represented in a physical model. Mapping an entity hierarchy in LLBLGen Pro is discussed by describing the most common ways of mapping entity hierarchies onto tables / views, and which of these are supported by LLBLGen Pro. The approach of this section is based on: "why would you use it, and if you want to use it, how would you implement it", to make it more understandable why entity inheritance can help you with your project, for which situations inheritance can be helpful, and thus for which situations you might want to consider another approach.

Supertypes / subtypes and NIAM terminology
In relational models, entities can derive from another entity, for example to specialize the definition of that other entity. Typically the derived entity is called a subtype and the entity derived from is called a supertype. LLBLGen Pro uses this same terminology: an entity which has subtypes is called a supertype, and an entity which is a derived entity is called a subtype. In the following sections, two hierarchy types are described which are typical for many situations. The hierarchy types are illustrated with (simplified) NIAM / ORM (Object Role Modelling: http://www.orm.net) diagrams. To help readers who aren't familiar with NIAM and / or ORM, the following brief description should help in understanding the presented diagrams. NIAM/ORM diagrams are based on sentences, which are readable in the diagram and which are called 'facts'. Entities are represented by oval objects with a solid border. Entity attributes (fields) are represented by oval objects with a dashed border. Relations between entities, or between entities and attributes, are represented by an '--[ | ]--' object, as illustrated in the following examples. A supertype/subtype hierarchy is represented by an arrow, starting at the subtype, pointing to the supertype.

One to many relation, which is mandatory on both elements

One to one relation, between an entity (mandatory) and an attribute

Hierarchy creation for proper entity relation modelling
First an example of a hierarchy which is constructed to utilize proper entity relation modelling:


This model is a simplified NIAM/ORM model (the full model is a bit larger than shown here; the other part is described in the next sub-section), which illustrates three entity types in an inheritance hierarchy (Employee, Manager and BoardMember) and different relations per entity. The common attributes for all entities in the hierarchy are defined with relations to Employee, which is the root of the hierarchy. The diagram further shows a relation between Employee and Department, which illustrates the semantic relation "Employee works for Department". Manager and BoardMember also have relations, which are specific for their type: Manager also has a relation with Department, but for the semantic relation "Manager manages Department". BoardMember has a relation with CompanyCar, to illustrate the semantic relation "BoardMember has a CompanyCar". These relations are with other entities. Take the last mentioned relation, "BoardMember has a CompanyCar", as an example: this relation is defined on the entity BoardMember because, in this situation, only BoardMembers are allowed to have a company car. Were the relation placed on Employee, every employee would, in theory, be able to have a related CompanyCar entity, and thus a company car. There are two main ways to construct a physical representation in tables from a hierarchy like this example:

1. One table, which contains all fields/attributes of all entities in the hierarchy.
2. One table per entity, which contains only the fields of that particular entity.

(Of course, for 'table' you can also read 'view'.) The first way, which is called TargetPerEntityHierarchy in LLBLGen Pro and which is discussed further in the next subsection, will define the aforementioned relation "BoardMember has a CompanyCar" on the same table / view where normal employees are stored. This can be a problem, as it is then up to program logic to limit the insertion of employee data so that a normal employee can't have a company car. A better approach in this situation is the second option, where a table/view is created for each entity in the hierarchy and the relation to CompanyCar is defined on the BoardMember table. This second way is called TargetPerEntity in LLBLGen Pro.

Physical representation in the data model and LLBLGen Pro entities

The typical hierarchy mentioned above is realized in your datamodel with one table/view per entity. This means that for the hierarchy above you'll get an Employee table/view, a Manager table/view and a BoardMember table/view. The Employee table is the leading table: here you define the primary key which uniquely identifies an entity instance in the database. As ID is a perfect candidate for this (there's a 1:1 relation between the entity Employee and the attribute), it becomes the primary key. Manager and BoardMember get this same field to identify the rows stored in their tables, but these are not new PK values: a foreign key constraint ties them to the ID of the supertype, so Manager.ID has a foreign key constraint to Employee.ID and BoardMember.ID has a foreign key constraint to Manager.ID. This way, referential integrity rules in the database make sure the data stored in the database is correct, also for derived entities. O/R mappers often require a discriminator column in the root table/view so they can determine the type of the data the root table/view is holding. In LLBLGen Pro you don't need to define a discriminator column on the root table: LLBLGen Pro can determine by itself the type of the data stored in the tables/views which make up a hierarchy.
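The foreign key chain described above can be sketched in SQL DDL as follows. This is a minimal illustration only, not code generated by LLBLGen Pro; the table and column names follow the example hierarchy, and the Name and CompanyCarID columns are invented for illustration:

```sql
-- Root of the hierarchy: defines the PK and holds the common attributes.
CREATE TABLE Employee
(
    ID   INT NOT NULL PRIMARY KEY,
    Name VARCHAR(100) NOT NULL
);

-- Subtype: same PK value, with a FK to the supertype's PK. No surrogate key here.
CREATE TABLE Manager
(
    ID INT NOT NULL PRIMARY KEY,
    CONSTRAINT FK_Manager_Employee FOREIGN KEY (ID) REFERENCES Employee (ID)
);

-- Subtype of Manager: its FK points to Manager, not directly to Employee.
CREATE TABLE BoardMember
(
    ID           INT NOT NULL PRIMARY KEY,
    CompanyCarID INT NULL,  -- relation "BoardMember has a CompanyCar"
    CONSTRAINT FK_BoardMember_Manager FOREIGN KEY (ID) REFERENCES Manager (ID)
);
```

Because BoardMember.ID references Manager.ID, which in turn references Employee.ID, a BoardMember row can only exist when the corresponding Manager and Employee rows exist as well, which is exactly the referential integrity described above.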

Note: Don't use surrogate keys on the subtype tables; it's important that the PK of a subtype table is the foreign key to the supertype's PK.

Entities
In LLBLGen Pro you can define the same hierarchy as defined above by simply making Manager a subtype of Employee and BoardMember a subtype of Manager. You can also let LLBLGen Pro try to find these hierarchies for you, by selecting the 'Construct Target-per-entity hierarchies' option in the Entities context menu. When you save a new entity which is a subtype and which is represented in the database by multiple tables, like the BoardMember entity, the entity will get a record in all the tables of the hierarchy: Employee, Manager and BoardMember. Updating such an entity will update its rows in the tables where the changed fields are located: if you change BoardMember.Name, an update statement is issued on the Employee table. Fetching a BoardMember will cause LLBLGen Pro to use INNER JOINs between Employee, Manager and BoardMember. It's important to note that you can't re-use a record in a supertype table for different subtype instances: if you store 'shared' information in an Employee table record, you can't share that record among different Manager instances. See also Limitations and Pitfalls later in this section.

Hierarchy creation for proper entity field availability modelling
Following is an example of a hierarchy which is constructed to have different field availability in the different entities in the hierarchy:

Page 54

This model is part of the bigger NIAM/ORM model presented in the previous subsection. At first it looks like the same type of hierarchy as the previous example; however, there's a small but important difference: the relations presented in this example are between an entity and an attribute. This means that this hierarchy specifies the specialisation of an entity for the purpose of adding different fields to the entity, without polluting the supertype with these fields. In this example, the supertype CompanyCar, which has a relation with BoardMember, is specialized with a DrawingHook attribute in the FamilyCar entity. The SportsCar entity has an extra attribute as well: a Cabrio attribute. Without inheritance, you would have to add these attributes to the CompanyCar entity and set them to NULL where they don't apply. This is a bit inconvenient, because a sportscar obviously never has a drawinghook (ok, some people are that crazy). Of the two main ways to construct a physical representation in tables, both are possible for this hierarchy; however, the first one is more efficient here, because you work on a single table and you don't need foreign keys to preserve referential integrity with related entities defined on subtypes: the relations with subtypes are defined in the root of the hierarchy, CompanyCar.

Physical representation in the data model and LLBLGen Pro entities
The typical hierarchy mentioned above is realized by simply flattening the hierarchy and storing all attributes of all the entities in the hierarchy in a single table. Every attribute of the root entity is defined as mandatory (not nullable), while every attribute of a subtype is defined as nullable. To be able to determine the entity type of a given row in the table/view, a discriminator column is used, with a value which represents the type. This column can be of any type, as long as it ends up as a System.Byte/Int16/Int32/Int64/Guid/Decimal or System.String type in LLBLGen Pro. An example of a discriminator column for the hierarchy in this subsection could be a column called 'CarType' of type int, which for example contains '1' for CompanyCar instances, '2' for FamilyCar instances and '3' for SportsCar instances.

Entities
In LLBLGen Pro you can define the same hierarchy by creating subtypes of the CompanyCar entity (or subtypes of subtypes in that hierarchy). As soon as an entity which isn't in a hierarchy is made the root of a hierarchy of type TargetPerEntityHierarchy, all fields which are nullable and not part of a relation are unmapped in the root entity, and can be mapped manually in subtypes. You can of course remap them in the root entity if you want/need to; however, nullable fields in the root entity are typically used for fields in subtypes, so to save you some work LLBLGen Pro unmaps them for you first. A dialog helps you define subtypes and specify discriminator columns. New entities in a TargetPerEntityHierarchy hierarchy don't have to get their discriminator field set to a value; this is done for you by LLBLGen Pro.
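The flattened, single-table representation with a discriminator column can be sketched in SQL DDL as follows. Again, this is a minimal illustration only; the column names follow the example and the LicensePlate column is invented for illustration:

```sql
-- One table holds the whole CompanyCar hierarchy.
CREATE TABLE CompanyCar
(
    ID           INT NOT NULL PRIMARY KEY,
    LicensePlate VARCHAR(20) NOT NULL,  -- root attribute: mandatory
    CarType      INT NOT NULL,          -- discriminator: 1=CompanyCar, 2=FamilyCar, 3=SportsCar
    DrawingHook  BIT NULL,              -- FamilyCar-only attribute: nullable
    Cabrio       BIT NULL               -- SportsCar-only attribute: nullable
);
```

Note how the subtype-specific columns are nullable: a row with CarType = 3 (SportsCar) simply leaves DrawingHook NULL, matching the rule that every attribute of a subtype is defined as nullable.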

Comparison of inheritance mapping with competing O/R mappers
Inheritance is an important part of most modern O/R mapper frameworks. Because O/R mapping is a generic technique with a wide variety of detailed descriptions, it can be confusing what each mapping strategy means and how it compares to another O/R mapper's way of mapping inheritance. In general there are 4 ways to map entity/class hierarchies (inheritance hierarchies) onto a set of tables/views:
1. Complete hierarchy mapped onto a single table. In LLBLGen Pro this is called TargetPerEntityHierarchy. This inheritance type is very easy to implement and is therefore supported by most O/R mappers.
2. Every type in a hierarchy mapped onto its own table, no discriminator column. In LLBLGen Pro this is called TargetPerEntity. This inheritance type is hard to implement; most O/R mappers fall back to option 3.
3. Every type in a hierarchy mapped onto its own table, with a discriminator column in the root type. Similar to option 2; because LLBLGen Pro doesn't need a discriminator column for option 2, this type is supported but the discriminator column is ignored / not required.
4. Every type in a hierarchy mapped onto its own table, where all fields of the supertype are present in each entity's table. The O/R mappers which do support this mapping call it Target per concrete entity. As it has severe limitations and is not efficient (both for saves/updates and for data storage), this setup is not supported by LLBLGen Pro.

Abstract entities
It can be that you've defined a hierarchy, for example the Employee, Manager, BoardMember hierarchy, and you don't want the developers of your team to use every type in that hierarchy, for example because one or more types in the hierarchy, like Employee, is defined just to provide attributes for the rest of the hierarchy, not to be a real entity. LLBLGen Pro lets you specify whether an entity is abstract or not; you do that in the entity editor. Entities which have subtypes, and whose supertypes (if any) are abstract as well, can be made abstract. Instances of an abstract entity can show up in multi-entity polymorphic fetches, but you can't instantiate such an entity in code by using its constructor.
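To illustrate the idea, a SelfServicing-style sketch follows. The class and method names (EmployeeCollection, GetMulti and the BoardMemberEntity subtype) are the ones typically generated for the example hierarchy; verify them against your own generated code, as they depend on your project:

```csharp
// Fetch all employees polymorphically: each row comes back as an instance
// of its actual type (EmployeeEntity, ManagerEntity or BoardMemberEntity).
EmployeeCollection employees = new EmployeeCollection();
employees.GetMulti(null);   // null filter: fetch all rows in the hierarchy

foreach(EmployeeEntity employee in employees)
{
    if(employee is BoardMemberEntity)
    {
        // Downcast where a subtype-specific field or relation is needed:
        // only BoardMembers have a CompanyCar in this model.
        BoardMemberEntity boardMember = (BoardMemberEntity)employee;
    }
}

// If Employee is marked abstract in the designer, the generated class is
// abstract as well, so 'new EmployeeEntity()' won't compile; only its
// subtypes can be instantiated directly.
```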

Limitations and pitfalls
Entity inheritance is a powerful instrument and can bring you a lot of advantages when working with plain relational data. It also comes with a warning label, as it does require you to think through your physical datamodel before proceeding, and can have performance limitations. LLBLGen Pro's inheritance implementation, while powerful, also has some limitations which most likely won't affect you in your work, but are worth mentioning.

Pitfalls with inheritance
Don't share data in records belonging to supertypes among different subtype instances. This was already mentioned, but it's important to repeat it in this list: a multi-table entity is the owner of the records saved for that entity instance in all the tables the entity is mapped to.
Deep hierarchies of TargetPerEntity can cause performance problems. When you create deep hierarchies of type TargetPerEntity, so the leafs in your hierarchy are mapped onto a lot of tables, be aware that when you fetch entities of that hierarchy, the runtime libraries create INNER JOINs to be able to pull derived entity instances as well. This can lead to some performance decrease when your tables are very large (contain a large number of rows).
Don't map instance variations as hierarchies. While this is a grey area, it's important to note that if you have variations in instances based on a semantic interpretation of the data, and there aren't likely to be many instances of a given 'type', it might be better not to create these subtypes. An example: you have a 'Department' entity, you create a 'Marketing' subtype of that Department entity, and from that MarketingEurope, MarketingUSA, MarketingAsia etc. subtypes. These last subtypes are actually instances of Marketing, not real subtypes.
Avoid defining subtypes for entities which can have more than one semantic type. In NIAM/ORM you can define multiple inheritance, but in .NET you can't. (In theory you can with interfaces, but LLBLGen Pro doesn't support interface based entity inheritance.) A typical example of this pitfall is the Person - Employee and Person - Customer hierarchy. If you create an Employee instance and you also allow an Employee to be a Customer, you're in trouble, as you can't cast to a type which is a sibling in the same hierarchy. In such a situation, use a role defining field to help the semantic interpretation of the 'person' entity, and avoid inheritance.

Limitations of the LLBLGen Pro entity inheritance implementation
Mixing of hierarchy types isn't supported. If you have an entity hierarchy of type TargetPerEntityHierarchy, you can't make the root of that hierarchy a subtype of an entity to form a TargetPerEntity hierarchy, and vice versa.
Multiple inheritance isn't supported. While .NET doesn't support multiple implementation inheritance, it does support multiple type inheritance, through interfaces. Because multiple entity inheritance is not that common (actually rather rare), LLBLGen Pro doesn't support it.

LLBLGen Pro v2.6 documentation. ©2002-2008 Solutions Design

Concepts - Stateless persistence
Preface
A remark on stateless development is in order, because a lot of developers misunderstand what 'state' really means in the n-tier context. By convention, all n-tier applications should be 'stateless'. This means that the application as a whole is not kept in some kind of 'state' by holding information in memory, or better: the separate tiers shouldn't be in a certain 'state' where they hold information in memory. The 'state' of the application is held by the database(s) used. This model is the complete opposite of the stateful way the so-called 'fat' clients used to work: the user worked with the complete application on his machine, and the application's state was held in memory.

The reason for a stateless approach is quite simple: multi-user. In a fat-client, single-user system, the application state was the same as the user's state, and thus it was a great performance gain to keep that user state in memory instead of in the database. For a set of black boxes stacked on top of each other, serving more than one user, the application state is not the same as the user's state; in fact, for every user the application can act completely differently. In these circumstances, in the n-tier application world, a stateless approach is more convenient, since the logic can correctly assume what the current state of the application is (it is stored in the database) and can use other system parts, like COM+, MTS or even webservices, without having to negotiate a current 'state'. Also, the reuse of system resources is very flexible in a stateless environment: as soon as a given object is done using another object, that object is destroyed (normal objects) or returned to a pool (for example connection objects), and not held in some global store. When thousands of users are using the system, it still performs well, because it doesn't keep thousands of objects in a global object store, but shares a smaller group of objects among the ones who need them.

Stateless development and O/R Mapping
If you work with objects that hold the data of an entity, the term 'stateless' becomes a little blurred and you can't really talk about a pure stateless environment anymore: holding any entity in memory practically makes the application stateful. Still, LLBLGen Pro does not cache the in-memory objects, and therefore considers the objects in memory temporary mirrors of the entities in the persistent storage. Every action on entities in memory has to be considered a 'stateless action', and should be persisted as soon as possible. As a user of the generated code, you should not cache loaded objects in memory longer than necessary, because changes made by other threads, on other machines, to the entities in the persistent storage are not seen by your thread, and your code can therefore wrongly assume the entity data held by the cached object is valid, while it is actually out of sync with the real entity in the persistent storage.

In the generated code you will find entity collection classes. These collection classes have methods to work with more than one entity and work directly on the persistent storage (when using Adapter, you'll find these methods in the DataAccessAdapter class). This means that when you update a set of entities through a method on an entity collection object, you are actually modifying the real entity data, which is seen by all other threads and users in your application. This way you can be sure the changes are propagated to other threads in your application as well. For individual entities and the objects holding their data: create the object, read the entity data into the object, use the data, and get rid of the entity object right after that. LLBLGen Pro doesn't use database locks on table rows as a form of concurrency control; every entity object is disconnected and can move around in the application freely. The application should be aware of the fact that when data is altered in an entity object, it has to propagate those changes back to the database as soon as possible. There is no need for panic though: in most applications there will be no problem at all, and most concurrency control problems related to the in-memory caching problem can be circumvented by a variety of solutions, be it optimistic concurrency control, functional locking (locking of functionality for other users while a single user is using a harmful piece of functionality), etc.

User state

User state is a confusing term, because in a way it is a form of stateful programming. User state is the state of the application for a particular user, but only in such a way that the user thinks the application is in his state; most of the time this means just GUI-related data is kept in a stateful global store, typically in session related objects, which are common in the ASP and ASP.NET world. ASP developers know, as a rule of thumb, not to keep large objects in the 'session' object, like recordsets or open database connections. The reason for this is that the application suddenly becomes stateful, even if the developer didn't intend this. In the .NET world, this hasn't changed. User state, for example the information a user has provided on pages one and two of a four page wizard, is light and influences only the GUI tiers. Data meant for other tiers shouldn't be kept in the user state but should be stored in the database. As a rule of thumb, ask yourself: "if tier N (N not being the GUI tier) is changed into a stateless webservice, placed on a webserver on the internet, thus not in your local secure network, will the application still work as planned, or do you have to change a lot in the application to keep it together?" When you have to change a lot, your application isn't stateless; when you don't, your application is pretty 'stateless' and you're well on track!

Concepts - Task based code generation
Preface
LLBLGen Pro uses a very modular code generator. In fact, it's not a code generator per se; it's a task based engine which executes tasks, and one of those tasks can be generating code. When you have defined some entities or other elements in a project, you can generate the project. When you start the generation process, LLBLGen Pro presents a window which allows you to select a couple of things important for the generation process: the target language, the target platform, the template group to use and the preset of tasks to run for this generation cycle (which is fully configurable). This section briefly describes the task and preset concepts, which are stored in .tasks and .preset files respectively.

Tasks, task groups, .tasks and .preset files
The generator engine works by executing tasks in a given order. Which tasks, and in which order, is defined in a so-called preset, which is stored in a .preset file and which you can create in the third tab of the code generation configuration window. Presets define a specific order of task definitions and, per task, the parameter values for that task. Tasks can be nested in task groups, which can in turn be nested in each other. Each task group is executed before the next one, and they are executed in the order in which they appear. Tasks and task groups are stored in .tasks files. Both .tasks and .preset files are stored in the Tasks folder of the LLBLGen Pro installation, or in the additional Tasks folder specified in the project properties. LLBLGen Pro ships with a variety of tasks and preset files so you can get started right away. If you want to create your own, you can: they're simple XML files which use the .xsd schemas defined in the Tasks\Xsds folder. Which tasks are executed, and the order in which they're executed, is fully controllable by you. You don't want an app.config file to be generated? Remove the task from the run queue or disable it and it won't be executed.
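As an indication of what such a file looks like, here is a hypothetical, simplified sketch of a preset referencing a task. The element and attribute names shown are illustrative only; the authoritative structure is defined by the .xsd schemas in the Tasks\Xsds folder and by the shipped .tasks/.preset files:

```xml
<!-- Hypothetical sketch, not a literal LLBLGen Pro file.
     Check the .xsd schemas in Tasks\Xsds for the real element names. -->
<preset name="Example.General">
  <taskGroup name="General">
    <!-- A task references a task performer and supplies its parameters. -->
    <task name="CreateDirectories">
      <parameter name="folderToCreate" value="EntityClasses" />
    </task>
  </taskGroup>
</preset>
```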

Task performers
Each task is performed by the assembly and class specified with the task in the task definition. The generator loads the specified assembly and creates an instance of the task performer class. The task performer class has to implement a known interface, ITaskPerformer, which allows the generator to execute it; a supplied base class already implements this interface, so a task performer class can simply derive from that base class. This way, creating your own task performer is very easy, and it can be plugged into the generator process without much effort. The complete project definition is available to the task performer class, so it can do whatever it wants with it. There is also a global cache to store values which should be communicated to other task performers. For example, LLBLGen Pro's CodeEmitter task performer is created once and stored in this cache, so it doesn't have to be loaded again each time a new class has to be generated. Task performers can signal the generator to abort the complete generation process, or simply signal that they failed but that the total process should continue.

Task parameters
Each task can have an unlimited number of parameters with a name-value structure. These parameters are supplied to the task performer instance when the task is executed. The task performers shipped with LLBLGen, DirectoryCreator and CodeEmitter, use parameters to do their job. If you look at the supplied .tasks and .preset files you get an idea what's possible with the shipped task performers. For details about these parameters, how to create your own task performer class and how to tweak the generator process in general, please see the LLBLGen Pro SDK, which is a free download for LLBLGen Pro customers.

Concepts - Templates and Template groups
Preface
LLBLGen Pro ships with two template based code generators: CodeEmitter and DotNetTemplateEngine. CodeEmitter (located in SD.LLBLGen.Pro.TaskPerformers.dll) is a template based code emitter which consumes templates written in TDL (Template Definition Language), LLBLGen Pro's own template language. DotNetTemplateEngine (located in SD.LLBLGen.Pro.LptParser.dll) is a template based code emitter which consumes templates whose logic is written in a .NET language, C# or VB.NET. The text which forms the output of the code generators can be in any format. The code generators are so-called task performers, which are able to perform a task as described in Concepts - task based code generation. To work correctly, the code generators consume templates, which are normal text files. Because the task performers are specified in template independent .tasks files, they use template IDs to identify which template to use for the execution of the code generator. Which file is bound to which template ID is specified in so-called template bindings files, which have the extension .templatebindings and are located in the Templates folder.

Templates
LLBLGen Pro uses two kinds of templates: generic (or shared) templates, and database specific templates. As discussed above, templates are identified by the template ID they're bound to. A template ID can have multiple files bound to it, one per target language, per .templatebindings file. This means that you can define your own bindings in a separate .templatebindings file, to overrule a shipped binding definition. As the template ID is target language independent as well as template file independent, you can use the task performers with any set of templates available. For example, when the CodeEmitter is executed, it looks up the template ID, which is specified as a parameter of the task, in the list of template bindings defined. If found, it loads the bound template file and uses that template as the one to work with. To learn more about using template bindings during code generation configuration, please see Designer - Generating code, tab 2.
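To make the binding mechanism concrete, here is a hypothetical sketch of a template bindings file; the element names, the template ID and the file path are illustrative only, as the real structure can be seen in the .templatebindings files shipped in the Templates folder:

```xml
<!-- Hypothetical sketch, not a literal LLBLGen Pro file. -->
<templateBindings name="MyCustomBindings">
  <!-- Binds a template ID to a concrete template file for one target language. -->
  <templateBinding templateID="SD_EntityTemplate"
                   filename="SharedTemplates\C#\entity.template"
                   language="C#" />
</templateBindings>
```

A custom file like this, placed alongside the shipped ones, is how a shipped binding definition can be overruled as described above.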

Template files
A template file is a code file containing the code that should be emitted into the file it is the template for (e.g. a template for an entity class). Templates which come with LLBLGen Pro have the extension .template, contain template logic based on the TDL language and are used by the CodeEmitter task performer. The TDL drives the CodeEmitter to create the requested output, for example looping through object lists or simply emitting a piece of information, like the name of the current entity, into the output. TDL statements are not built on top of an existing language and are solely meant for code emitting using LLBLGen Pro. For an in-depth discussion of TDL, how to create your own templates and other details related to the templates in LLBLGen Pro, please see the LLBLGen Pro SDK, which is a free download for LLBLGen Pro customers.

Included template files
LLBLGen Pro ships with two groups of template files: "SelfServicing" and "Adapter". All templates of these groups are contained either in one template directory per driver (the driver specific templates, e.g. SqlServerSpecific) or in the SharedTemplates folder, all located in the Templates folder in the LLBLGen Pro installation folder. SelfServicing and Adapter templates are stored in the same folder, since some template files are shared between the two template groups. Each template group generates code which is different in style, functionality and paradigm from the other. Which template group to use is selected with the template group combo box on the first tab of the generator configuration window. Per template group you can have different scenarios, like 'General' and 'Two classes', which can be selected by picking the right preset in the third tab of the generator configuration window.

Below are short descriptions of the template groups' typical aspects, so you can better decide which template group to choose for your project.

SelfServicing
SelfServicing is the template group which was initially shipped with LLBLGen Pro. It's named SelfServicing because the entity, typed list and typed view classes contain all the logic and data to help themselves interact with the persistent storage. This means that no extra object instance, like a broker, is required to fetch entity data from the persistent storage or to save data into the persistent storage. This can be very handy if you don't want to be bothered with the hassle of extra objects to fetch data related to an instance you hold, or when you want to traverse relations like myCustomer.Orders[0].Products[0].ShipperName in a class that potentially doesn't have database access, like a GUI. Such a traversal fetches data on the fly with lazy loading (load on demand), using logic controlled by code inside the entities themselves: the entities instantiate the required DAO objects to retrieve the data and to instantiate the related objects. This is transparent to the developer and easy to use.

The presence of persistence logic inside the entity, typed list and typed view classes can, in some situations, be a disadvantage of the SelfServicing code. For example, if you as a project manager do not want your GUI developers to call myEntity.Save(), you have to add extra code to the generated classes to prevent any save action from happening, and even then the logic is exposed to the developer. The template group called Adapter solves this. In short, SelfServicing follows the paradigm that object persistence is a concern of the objects being persisted, so an entity object should know how to persist itself. SelfServicing's code is generated for one database type (e.g. SqlServer / Oracle). An example piece of SelfServicing code follows below. This code reads an Order entity from the persistent storage, alters its EmployeeID field and saves it back.
// C#
OrderEntity myOrder = new OrderEntity(10254);
myOrder.EmployeeID = 6;
myOrder.Save();

' VB.NET
Dim myOrder As New OrderEntity(10254)
myOrder.EmployeeID = 6
myOrder.Save()

SelfServicing comes in two variants: General and TwoClasses. The General variant, formulated in the General preset (see: Concepts - task based code generation), generates one class per entity. The TwoClasses variant generates two classes per entity: entityNameEntityBase, which contains the generated plumbing code, and entityNameEntity, which derives from that class and which is meant for your business logic.

Adapter
Adapter is called Adapter because all persistence actions are performed by an 'Adapter' object, similar to the DataAdapter in .NET. Adapter's entities don't contain persistence information, nor do they contain persistence action logic. Because it uses adapter objects, called DataAccessAdapter, you can choose the adapter you want to use to persist the entity object when you actually need it. Adapter uses a single point where the persistence data is stored, hidden away from the developer. The DataAccessAdapter itself has access to the persistence information in code, which is constructed at runtime in a cache with persistence info objects. Adapter's generated code is not 'selfservicing', which means that you have to pull data from the database up front, or consult a DataAccessAdapter object to read data again later, because there is no logic inside the entity objects to control persistence actions like reading related entities from the database through lazy loading. This can be perfect for projects where it is mandatory that every database action is started and controlled by a selected piece of code, for example a selected set of business logic classes in a middle tier. It can however be a bit cumbersome for projects where the selfservicing aspects of an entity should be utilized. It is therefore important to decide how you look at what persistence of objects means for your code. Adapter follows the paradigm that persistence is a 'service' provided by an object or group of objects, offered by logic outside the objects affected by the service. As an illustration of this, the same functionality shown in the SelfServicing example above is written using the Adapter logic:
// C#
OrderEntity myOrder = new OrderEntity(10254);
DataAccessAdapter adapter = new DataAccessAdapter();
adapter.FetchEntity(myOrder);
myOrder.EmployeeID = 6;
adapter.Save(myOrder);

' VB.NET
Dim myOrder As New OrderEntity(10254)
Dim adapter As New DataAccessAdapter()
adapter.FetchEntity(myOrder)
myOrder.EmployeeID = 6
adapter.Save(myOrder)

Adapter requires more lines of code when fetching data; however, it also gives more control over the steps taken. For example, it allows you to specify whether the connection should stay open, so actions taken after the current action will be faster because an open connection is already available (otherwise the connection is closed after each action), or which connection string to use for this particular action. It is therefore key to first determine which paradigm you want to use before generating code. Adapter also comes in two variants: General and TwoClasses. The General variant, formulated in the General preset (see: Concepts - task based code generation), generates one class per entity. The TwoClasses variant generates two classes per entity: entityNameEntity, which contains the generated plumbing code, and MyentityNameEntity, which derives from that class and which is meant for your business logic. Users of LLBLGen Pro v1.0.200x.y are likely familiar with the 3rd party add-on 'AdapterExtendedEntityTemplates'. The TwoClasses variant is this add-on, but now included in the standard templates shipped with LLBLGen Pro. All template groups are fully supported and have the same basic persistence options; however, one template group can have more sophisticated persistence options than the other, while the other template group might have more functionality on board for ease of use.

When to use which template group?
Both template groups offer a wide set of functionality, but do that in a different way. This difference makes it important for you to know when to pick which template group for a given project.
Below are two short lists which sum up reasons for choosing each template group and which will help you decide which template group to take.

When to use SelfServicing
- When the project targets a single database type.
- When you are not using a distributed scenario like remoting or webservices.
- When you need navigation through your object model in a databinding scenario and you want to load the data using lazy loading.
- When you don't need to extend the framework.
- When you like to have the persistence logic inside the entity classes.
- When you do not require fine grained database access control, like targeting another database per call.

When to use Adapter
- When the project targets multiple database types, or may do so in the future.
- When you are using a distributed scenario like remoting or webservices.
- When you don't need lazy loading scenarios in relation traversals. You can still navigate relations, but you have to fetch the data up front.
- When you need to extend the framework with derived classes.
- When you like to see persistence logic as a 'service', offered by an external object (broker, adapter).
- When you require fine grained database access control.
- When you want to convert entities to XML and back, and performance and XML size are important.


LLBLGen Pro v2.6 documentation. ©2002-2008 Solutions Design


Concepts - Database drivers
Preface
The LLBLGen Pro system uses specific database drivers per database vendor, using the 'provider model'. The drivers are used by the designer; the generated code doesn't use these drivers, instead it uses Dynamic Query Engines (DQEs). LLBLGen Pro ships with the SqlServer 7/2000/2005/2008 driver, Oracle 8i/9i driver (ODP.NET), Oracle 10g/11g driver (ODP.NET), Oracle 8i/9i/10g/11g driver (Microsoft Oracle provider), MySql 4.x/5.x driver, IBM DB2 7.x/8.x/9.x driver, Firebird 1.x/2.x driver, PostgreSql 7.4+/8.x driver, Sybase ASE driver, Sybase ASA driver and Microsoft Access 2000/XP/2003/2007 driver. A database driver is used to connect to a database server and to retrieve all available schema information for the credentials and connection information specified. This information is then stored in LLBLGen Pro's own format, which is used to define all other elements in a project that are based on or link to schema information. This way LLBLGen Pro can be used with every database system that has an LLBLGen Pro database driver.

Database driver's tasks
The database drivers have a well defined set of tasks:
- Providing connectivity with the database server for the designer.
- Providing ways to construct connection strings.
- Converting database types to .NET types.
- Retrieval of catalog names (if applicable).
- Retrieval of schema definitions inside a catalog / database / owner schema. This returns all tables, views and stored procedure meta-data and their details, like table fields and parameters, foreign key constraints and unique constraints.
- Refresh of schema information.

The schema meta-data produced by the database driver is serialized into the project, which makes a project connection-independent: the complete schema set and all information a driver can provide is saved in the project. When a user wants to work with a project, there doesn't have to be a connection with the database. Only when the schema set has to be refreshed does the user have to re-connect to the database.

Supported features per database driver
Each database driver supports a variety of features and these are listed below.

SqlServer
- SqlServer 7: All features of SqlServer 7, including user defined types (type synonyms).
- SqlServer 2000: Scope_identity() and all SqlServer 2000 specific types.
- SqlServer 2005: Xml datatype, Varchar(MAX) and Varbinary(MAX), all database constructs, including User Defined Types (UDTs) written in .NET, except XQuery queries.
- SqlServer 2008: All SqlServer 2005 features as well as the 4 new types: Date, DateTime2, DateTimeOffset and Time.
- Multiple catalogs per project.

Oracle (ODP.NET)
- Oracle 8i (8.1.7) or higher.
- Tables, views, sequences and procedures.
- Synonyms for tables, views and sequences.
- REF CURSOR output parameters in procedures.
- All native Oracle types, including *LOB, and synonyms for types like the type INT.
- XMLType support on 9i and higher.


- All joins are non-ansi by default (10g/11g use ansi joins by default), to support 8i. Configurable to use ansi joins through config file settings.
- Multiple schemas per project.
- No support for user defined types.
- Required for Oracle 8i/9i: ODP.NET 9.2.0.x (free download from Oracle), Oracle client 9.2 (requirement for ODP.NET), Oracle 8i (8.1.7) or higher.
- Required for Oracle 10g/11g: ODP.NET 10.x (free download from Oracle), Oracle client 10.2 (requirement for ODP.NET) or higher, Oracle 10g.

Oracle (Microsoft Oracle provider)
For supported features, see Oracle (ODP.NET). The Microsoft Oracle provider is available and supported on .NET 1.1 or higher. Microsoft's Oracle provider requires an Oracle client present on the system; please consult the Microsoft Oracle provider documentation in the .NET reference manual (System.Data.OracleClient).

Restrictions in the Microsoft Oracle provider:
- XMLType is not supported by LLBLGen Pro and the MS Oracle driver, as that type isn't supported by the Microsoft Oracle provider. If you need to use this type, use the ODP.NET version of the Oracle driver.
- All NUMBER(x, y) types are seen as System.Decimal. This can be a huge disadvantage; in that case, consider using an ODP.NET based driver.

Firebird/Interbase
- All features of Firebird 1.5 and higher, except array types. Dialect 3 only.
- .NET 1.1 or higher.
- Required for Firebird: the Firebird.NET provider by Carlos Alvarez, available at the sourceforge download site for Firebird. The driver is built against the latest v2.0 build of the Firebird.NET provider. The .NET 1.1 DQE you'll use in your .NET 1.1 code is compiled against the Firebird.NET provider v1.7.1 for .NET 1.1. The .NET 2.0+ DQE is built against the Firebird.NET provider v2.0.1.0 for .NET 2.0.

PostgreSql
- All features of PostgreSql 7.4 or higher, except array types. All datatypes supported by Npgsql are supported.
- Required for PostgreSql: the Npgsql provider, available at the PostgreSql website. LLBLGen Pro comes with the Npgsql .NET provider dll for .NET 2.0, as the Npgsql provider doesn't install itself in the GAC. To use the generated code in your own project, be sure to download the latest Npgsql .NET provider for .NET 1.x or .NET 2.0, depending on the target platform of your code.

Microsoft Access
- All features of MS Access 2000, except parameterized stored queries, so no stored procedure calls.
- Database passwords and security files are supported.
- Limiting the number of objects to return in a query requires the primary key field(s) to be added to a sort clause, if a sort clause is specified.
- LLBLGen Pro transaction savepoints aren't supported.
- Required for Microsoft Access: OleDB driver for Jet v4.0, which is shipped with MDAC 2.5 or higher (a requirement for .NET, so these are installed), and a .mdb file in the MS Access 2000 format or higher.

IBM DB2 UDB
- IBM DB2 UDB v7.x/8.x/9.x.
- Tables, views, sequences, identity columns and procedures.
- All native IBM DB2 UDB types, including *LOB.
- No support for user defined types.
- No support for table / view aliases. A table / view alias is a public alias for an existing schema table and defined as such in the schema. Aliases for tables in queries are supported.
- No support for iSeries DB2 installations.
- Required for DB2: IBM DB2 .NET provider, shipped with the latest ClientAccess version, also available through the DB2 personal edition installation, or through the IBM website for DB2 licensees.

MySql
- MySql v4.x (4.1 or higher with InnoDB preferred) or v5.x.
- Tables, identity columns, primary keys, unique constraints, subqueries.
- All native MySql types except SET and ENUM (which will be converted to VarChar).
- v4.x: No support for database defined foreign keys. Foreign key constraint meta-data defined in the database isn't read by the MySql driver from a v4.x MySql database.
- No support for stored procedures.
- Required for MySql: CoreLab's MySqlDirect.NET provider v4.x. LLBLGen Pro also supports MySqlDirect.NET v3.55.x, see note below.

Note: Support for MySql using the CoreLab MySqlDirect provider v3.55.x has its own driver dll. Please copy the driver dll from the LLBLGenPro Installation Folder\Drivers\MySql\v355 folder into the LLBLGenPro Installation Folder\Drivers\MySql\ folder, overwriting the v4-based driver, and restart the LLBLGen Pro designer.

Sybase Adaptive Server Enterprise (ASE)
- All features of Sybase Adaptive Server Enterprise v12.x+ are supported, except Java based types and proxy tables.
- Single catalog per project.
- Floats with precision < 16 are mapped on System.Single; floats with precision >= 16 are mapped on System.Double.
- Output parameters for procedures aren't recognized, as Sybase ASE doesn't store this information in the meta-data, so output parameters are always seen as input parameters. Grouped (overloaded) procedures are supported.
- The Sybase ASE client class AseParameters collection isn't CLS compliant, which can cause compile problems where CLS compliancy is forced through the assemblyinfo file.
- Numeric identity (Identity) columns are always set to DBType 'int'.
- Required for Sybase ASE: the latest ASE ADO.NET provider from Sybase, v1.1.5. This provider is compiled against .NET 1.1 and therefore there's no .NET 1.0 support.

Sybase iAnywhere (ASA)
- All features of v8.x or higher are supported, except Java based types and proxy tables.
- Single catalog per project.
- .NET 2.0 or higher is supported. iAnywhere 8 or higher is supported.
- Owners 'SYS', 'dbo', 'SA_DEBUG' and 'rs_systabgroup' are filtered out.
- (Long)varbit bitarrays are mapped to strings, as the iAnywhere provider does that too.
- Users should specify the database service name for the service to connect to, not the server name (or IP address) the database service runs on.
- Required for Sybase ASA: the latest iAnywhere ADO.NET provider from Sybase for .NET 2.0, v10.0.1.34152 or higher.




Concepts - Dynamic SQL
Preface
LLBLGen used stored procedures, and for its successor, LLBLGen Pro, stored procedures were considered as well. However, one of the design goals was to be database independent, and we wanted to make it easy for users of the generated code to produce queries on the fly and query for entity objects in a flexible manner. For that, stored procedures posed a problem: their interface is set in stone, and when you want to do something that is not programmed in the available set of procedures, you have to add another procedure.

Dynamic SQL and Dynamic Query Engines
The solution is to use dynamic SQL, which means all queries are generated at runtime. These SQL queries are fully parameterized and are created in a Dynamic Query Engine (DQE). Each database driver comes with its own DQE, which is tailored to create queries especially for that particular database. Because today's RDBMSs are fully equipped with an optimizer that keeps execution plans of executed queries, including queries not formulated in a procedure, you don't lose any speed in query execution or query text compilation. Because dynamic SQL contains only those statements which are relevant to the action to perform, the dynamically created query is in a lot of cases faster than a stored procedure: an UPDATE with just one SET clause is faster than an UPDATE setting all fields to new values, which is what a generic update procedure typically has to do, because you can't write a separate stored procedure for every combination of fields.

Flexibility
When a table has 2 or more foreign key constraints, and thus the related entity has 2 or more m:1 relations and 1 or more m:n relations, creating stored procedures for all possible filters on that table for entity retrieval is a tedious task, because it can amount to quite a lot of procedures: for every combination of foreign key fields you have to write a different select statement. As a work-around, you can use parameters which can be NULL in a stored procedure. However, you then have to place every parameter in the WHERE clause, even if just one parameter is not NULL, and use COALESCE to test for NULL. These kinds of queries are slower than their dynamic counterparts. See for analysis, test code and comments the following links: Stored procedures vs dynamic queries, Benchmark code.

The dynamic nature of the DQE in LLBLGen Pro also has the advantage of being able to construct SQL using normal program constructions, like classes and enumerations. Moreover, these queries are then database independent, because they use constructions formulated in the LLBLGen Pro runtime library, like predicate expressions. See the section about Using the generated code for details about how to write dynamic queries in your own code, or look into the code that is produced by the Typed List templates.
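As a hedged sketch of such a dynamically constructed, parameterized query (the OrderEntity / OrderFields names are assumptions in the style of the examples earlier in this manual; see Using the generated code for the exact API):

```csharp
// C# - Adapter: fetch all orders for employee 6 placed after a given date.
// The predicates below are translated by the DQE into one parameterized SELECT.
EntityCollection<OrderEntity> orders =
    new EntityCollection<OrderEntity>(new OrderEntityFactory());
IRelationPredicateBucket filter = new RelationPredicateBucket();
filter.PredicateExpression.Add(OrderFields.EmployeeID == 6);
filter.PredicateExpression.AddWithAnd(OrderFields.OrderDate > new DateTime(1997, 1, 1));
using(DataAccessAdapter adapter = new DataAccessAdapter())
{
    adapter.FetchEntityCollection(orders, filter);
}
```

Any combination of filters can be composed this way at runtime, which is exactly the flexibility a fixed set of stored procedures can't offer.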


Concepts - Type Converters
Preface
LLBLGen Pro creates the entity definitions and entity field definitions from the table / view definitions available. This is a very productive technique to get a lot of entities defined in a very short time. The resulting entity and its fields often meet the user's requirements, but sometimes names have to be altered, and sometimes the .NET types of the entity fields are not the ones the user expected. For name construction, LLBLGen Pro offers patterns which produce names for, for example, fields, but what about the .NET types?

By default, LLBLGen Pro uses the database driver in use to produce a .NET type for a field, based on the database type definition of the target field (table/view field). This .NET type is equal to the .NET type of the value returned by the datareader of the .NET provider for that database. For example, take a field in Oracle with database type NUMBER(6,0). A given row's value for that field will be returned as a System.Int32 typed value. Had the type been NUMBER(4,0), the type of the value would have been System.Int16. So an entity or typed view field mapped onto a table/view field with type NUMBER(6,0) will get the .NET type for that database type, which in this case is System.Int32.

The problem with this system is that it doesn't allow the user to specify a different .NET type for a field. Take the same Oracle database for example. Oracle doesn't have a boolean type (most databases don't have a boolean type). This is often solved by using a NUMBER(1,0) (single digit numeric value) type; however, the entity field or typed view field will then get the type System.Int16, not System.Boolean.
The LLBLGen Pro designer didn't let the user define a different type, because that would require a conversion between the .NET type reported back by the datareader (in this case System.Int16) and the actual type of the field (System.Boolean), and vice versa: in filters and when the entity is saved, the boolean value has to be converted to a numeric value to be usable in the database system. This is solved by specifying a type converter for a field. A type converter is a class which can convert from a set of .NET types to a given .NET type (which can be any .NET type), and from that .NET type back to a given set of .NET types. The type converter is defined on the field and works under the hood: the .NET type of the field holding the type converter becomes the same as the core .NET type of the type converter, and everything else is taken care of by LLBLGen Pro. The .NET type provided by the ADO.NET provider used is converted to the new .NET type of the field by the set type converter, and vice versa.

Using a type converter
Let's look at our example again: the absence of a boolean type in some databases. LLBLGen Pro v2.0 comes with a type converter which converts from a non-fractional numeric type (byte/sbyte/int16/uint16/int32/uint32/int64/uint64), the 'From' type, to boolean, the 'Core' type, and vice versa. All entity fields whose original .NET type is one of the 'From' types can have this type converter set as their type converter. When a field of such a From type is set to have this particular type converter, its .NET type changes to the 'Core' type of the type converter, in this case System.Boolean. Generating the project will result in an entity which now has System.Boolean as the type for the entity field in question. This allows the user to set the field to 'true' or 'false', instead of 0 or 1. Also, filters created for this field work with 'true' or 'false', not with 1 or 0.
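As a hedged illustration (EmployeeEntity and its IsFullTime field are made-up names, not taken from a shipped example), code working with such a converted field could look like:

```csharp
// C# - sketch: IsFullTime is mapped on e.g. an Oracle NUMBER(1,0) field and has the
// numeric-boolean type converter set on it in the designer, so its .NET type is Boolean.
EmployeeEntity employee = new EmployeeEntity();
employee.IsFullTime = true;   // stored as 1 in the database, converted under the hood

// Filters also use the boolean value; the converter produces the numeric parameter value.
IPredicateExpression filter = new PredicateExpression(EmployeeFields.IsFullTime == true);
```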

Under the hood
Every field has persistence info, which is in fact the mapping information of the field: on which database field it's mapped, what the characteristics of that database field are, etc. This persistence info also contains an instance of the set type converter (if the field has one set; otherwise it's null/Nothing). When an entity's data is read from the database, a field which is about to get a value from the data read is checked for a type converter instance. If one is set, the type converter is asked to convert the value from the read data, and the conversion result is stored in the entity field. If no type converter is set, no conversion takes place. The other way around works the same: when an entity is saved, right before the field's value is stored inside a parameter to be used in an INSERT or UPDATE statement, a set type converter is asked to convert the value back to the original .NET type of the field, in our example to Int16. For filters, this mechanism is also followed. This is completely transparent and very efficient: the application developer doesn't notice the type converter is used at all.

Note: if you're using a type converter in your project, be sure the generated code references the assembly the type converter is in. See: Generated code - compiling the code

All .NET types supported
The usage of a type converter doesn't stop with converting integers into booleans and vice versa. You can write a converter for any .NET type. For example, you can write a type converter to convert a string which contains an XML document in text to an XmlDocument and vice versa, or a type converter which converts a byte array into a JPEG image and vice versa. This is then done under the hood, transparently. To write your own type converter, please consult the SDK documentation for further details, as well as the sourcecode of the type converters shipped with LLBLGen Pro, which is also enclosed in the SDK package.
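As a minimal sketch of what such a converter can look like (the real converters shipped with LLBLGen Pro implement more of the TypeConverter contract; consult the SDK documentation and their sourcecode for the full picture), a converter between Int16 and Boolean could be written as:

```csharp
using System;
using System.ComponentModel;
using System.Globalization;

// Sketch: converts between Int16 (the 'From' type, coming from the datareader)
// and Boolean (the 'Core' type of the converter).
public class BooleanInt16Converter : TypeConverter
{
    public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
    {
        return sourceType == typeof(Int16) || base.CanConvertFrom(context, sourceType);
    }

    public override bool CanConvertTo(ITypeDescriptorContext context, Type destinationType)
    {
        return destinationType == typeof(Int16) || base.CanConvertTo(context, destinationType);
    }

    // Int16 -> Boolean, used when a value is read from the database.
    public override object ConvertFrom(ITypeDescriptorContext context, CultureInfo culture, object value)
    {
        if(value is Int16)
        {
            return ((Int16)value) != 0;
        }
        return base.ConvertFrom(context, culture, value);
    }

    // Boolean -> Int16, used right before the value is stored in an INSERT/UPDATE parameter.
    public override object ConvertTo(ITypeDescriptorContext context, CultureInfo culture, object value, Type destinationType)
    {
        if(destinationType == typeof(Int16) && value is bool)
        {
            return (Int16)(((bool)value) ? 1 : 0);
        }
        return base.ConvertTo(context, culture, value, destinationType);
    }
}
```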

Type converter structure
Type converters are used in both the LLBLGen Pro designer and at runtime by the generated code. It is therefore essential that the class is based on a type which is known by both the designer and the runtime libraries. To avoid having to reference runtime libraries in the designer, or designer assemblies in the runtime libraries, a type known to both systems was important. This is why a type converter is a class derived from System.ComponentModel.TypeConverter, a class of the .NET framework. At startup, LLBLGen Pro probes every assembly in the TypeConverters folder in the LLBLGen Pro installation folder for types derived from System.ComponentModel.TypeConverter. All these types are seen as type converters, stored in the designer internals, and listed in the entity editor and typed view editor when a field is selected.

Type conversion definitions
In a large project, setting a lot of fields to a given type converter can be a cumbersome task. Also, when a catalog is refreshed or when a new entity is added, you probably want type converters to be set on given fields automatically. For this, LLBLGen Pro defines type conversion definitions. Type conversion definitions specify filters which can be used to set a type converter on a large group of fields at once. Per project you can define as many type conversion definitions as you want.

Each type conversion definition works on a given real .NET type, which is selected from the list of .NET types produced by the used database driver. Based on that .NET type, LLBLGen Pro offers you the list of known type converters which accept that type as a From type. You can then fine-tune your filter on database type, precision, scale and length; all are optional.

The preferences of LLBLGen Pro let you specify whether a type converter should be assigned automatically to a new field, using the "AutoAssignTypeConverterToNewField" setting (inherited by a new project, so on an existing project, set the property there as well). When this setting is set to true, LLBLGen Pro will, for each new field (for example created by the refresher or when a new entity is added), try to find a type conversion definition matching the field's .NET type and database type definition. The match with the most matching elements is selected, and the type converter defined in that type conversion definition is set as the type converter for that new field.

LLBLGen Pro also offers a plug-in to quickly set type converters on a set of entities / typed views, using the type conversion definitions defined in the project. You can define / edit the type conversion definitions of a project by selecting 'Edit Type Conversion Definitions...' in the context menu of the project node in the project explorer, or from the Project menu of the LLBLGen Pro designer.
For more information about defining type conversion definitions, please see Defining type conversion definitions in the designer section of this manual.




Concepts - Dependency Injection and Inversion of Control
Preface
Using the generated code produced by LLBLGen Pro means using a framework, the LLBLGen Pro framework. To fully utilize the potential of the framework, it is important that you can extend it by filling in the blanks left open for that purpose. For example, if you want to add validation to the framework, you should be able to do so without having to write a lot of code to make validation happen at runtime. The same goes for authorization or auditing, for example. To be able to do that, the LLBLGen Pro framework uses a mechanism called Inversion of Control, or in short: IoC.

Inversion of control is the simple idea of solving a dependency of class X on class Y not from within X, but from outside X. A typical LLBLGen Pro example is an entity validator class, derived from ValidatorBase (see for more details Using the generated code - Validation). Say you have a Customer entity and you've written a CustomerValidator class. You now want to instantiate a CustomerEntity instance and set its Validator property to an instance of CustomerValidator, so validation of the data inside the CustomerEntity instance is performed by the CustomerValidator instance. What's important now is how this Validator property is set to an instance of CustomerValidator. This is what's discussed in this section.

What's discussed below is illustrated with the entity validator concept. The same applies to entity auditors, entity authorizers and concurrency predicate factories for entities. You can also use the mechanism for your own properties in entity classes you added yourself.

Inversion of Control (IoC) by using Dependency Injection (DI)
Let's look at our example again: the CustomerEntity instance, let's name that instance C, and the CustomerValidator instance, let's name that instance V. For the application you're writing, C has a dependency on V, as C needs the validator V to perform validation, or better: to let the framework perform all kinds of validations at runtime by calling into V. Though it might be that C doesn't need V in particular; it could also use another validator for a customer entity class with slightly different rules, SpecialCustomerValidator. Let's call the instance of that class in our example SV.

Because LLBLGen Pro uses inversion of control (IoC) for validators, authorizers etc., you're able to select which validator you want to use for C, namely V or SV, without changing the code for C, as the dependency of C on the validator to use isn't defined inside C, but outside C. With outside C is meant: any outside source can set the validator of C. This gives you the freedom to use a separate mechanism to set the validator for C, by injecting the validator at runtime when C is instantiated. This injecting is called Dependency Injection (DI), as it injects an object Y into an object X which X depends on, which is simply setting a property on X to the value Y.

That all might sound complicated, but it's actually very simple: given our example with the CustomerEntity instance C and the two validator objects V and SV, we can use a Dependency Injection (DI) mechanism to inject either V or SV into C at runtime, which comes down to setting the property C.Validator to either V or SV. For .NET there are several DI frameworks available to perform this injection at runtime for you: StructureMap, ObjectBuilder, Spring.NET or the Castle Project, to name a few. One of the things these frameworks all have in common is that they use a factory which builds the entity.
So instead of using a normal object instantiation statement with the new keyword, you call a factory and it returns the object you requested, injecting all objects to inject for you. It can be inconvenient not to be able to use the new keyword and to always have to call a factory, as not using the factory bypasses the injection mechanism and doesn't set the objects you want. Also, you'd need to use another framework to do the dependency injection for you. To help you get up and running without learning another framework, LLBLGen Pro supports its own Dependency Injection mechanism. If you're comfortable with the 3rd party frameworks for dependency injection, you're free to use these, as the LLBLGen Pro framework doesn't rely on its own DI mechanism to function properly: you can inject validators, authorizers etc. into entities with these other frameworks just fine, and the injected objects will function normally. The LLBLGen Pro DI mechanism is discussed in the next paragraph.

LLBLGen Pro's ways to inject dependent objects into entities
LLBLGen Pro supports a couple of different ways to inject objects entities depend on into entity objects at runtime. One has been available in LLBLGen Pro for quite some time now, and one is new starting in v2.5.

1. Overriding a Create method to create instances at runtime. This mechanism has been available in LLBLGen Pro since v1.0 and allows developers to write code to insert objects at runtime into entities. For example, one could override the CreateValidator method in a partial class to create a validator object when the entity class is instantiated. One could also use the override of this method to call into a factory to produce the validator for the particular context. See the LLBLGen Pro reference manual for details about EntityBase and EntityBase2 (which are the base classes for entities in resp. SelfServicing and Adapter), which methods are available to you in these classes, and what their purposes are.

2. Using the Dependency Injection mechanism built into LLBLGen Pro. This mechanism was first introduced in v2.5 of LLBLGen Pro. It lets developers write validator classes, authorizer classes, ConcurrencyPredicateFactory classes etc. in a separate project, without any ties to the generated code, of which instances are injected at runtime into entity instances, without the necessity of overriding methods in partial classes of the entity classes. When an entity class is instantiated, be it through a factory or with the new keyword, the LLBLGen Pro framework will automatically find the instances to inject and perform the injection for you.

The DI mechanism in LLBLGen Pro is used for injecting objects into entities: entity classes are prepared to get the objects they depend on injected by the DI mechanism, other classes are not. This doesn't mean you can't enable these classes to use the DI mechanism, because you can.
All you have to do is call the DependencyInjectionProvider to inject the objects the instance relies on, from the constructor of the class. Do this with the following code. (For entities, you don't have to do anything, it's been taken care of for you. Only use the following code if you want to use the LLBLGen Pro DI mechanism to inject objects into classes not yet prepared for DI.)

// C#
DependencyInjectionInfoProviderSingleton.PerformDependencyInjection(this);

' VB.NET
DependencyInjectionInfoProviderSingleton.PerformDependencyInjection(Me)

After using this line of code in the constructor of your class, you can use code like:

// C#
MyClass c = new MyClass();

' VB.NET
Dim c As New MyClass()

and c will get all objects to inject into an instance of MyClass injected. For entities, you don't have to do anything, it's been taken care of for you. To set up and use the LLBLGen Pro Dependency Injection mechanism, and for example how to use Dependency Injection Scopes, please see the section Setting up and using Dependency Injection in the Using the Generated Code section to get started.
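The first mechanism listed above, overriding a Create method, can be sketched as follows. This is a hedged example: the exact method name and return type to override are documented in the reference manual entries for EntityBase / EntityBase2 mentioned earlier.

```csharp
// C# - sketch: give every CustomerEntity instance its validator via a partial class.
public partial class CustomerEntity
{
    protected override IValidator CreateValidator()
    {
        // Could also call into a factory here to pick a validator for the current context.
        return new CustomerValidator();
    }
}
```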




Using the designer
Preface
LLBLGen Pro comes with a visual application, the designer, which is the application you start by starting LLBLGenPro.exe. The designer allows you to create and design projects, and to control the generation of source code from a project. This section guides you through the different aspects of the designer: which elements you will find in the GUI, and how to perform the tasks needed to construct the project that will, in generated form, form the correct business facade tier for your .NET application.

The GUI elements
This section discusses the different elements in the GUI and their purposes.

Preferences and Project properties
This section discusses the LLBLGen Pro preferences and project properties.

Creating a project
This section discusses how to create a new LLBLGen Pro project.

Adding and editing entities
This section discusses the aspect of adding a new entity definition or sets of entities, and it describes the entity definition editor.

Adding custom relations
This section discusses the two editors to add custom relations (relations which are not based on an existing foreign key constraint).

Adding and editing typed lists
This section discusses how to add a new typed list definition, and describes the typed list editor which is used to construct typed lists.

Adding and editing typed views
This section discusses how to add a new typed view definition and the typed view editor.

Adding and editing stored procedure calls
This section discusses how to add a new stored procedure call definition to the project, as well as the stored procedure call editor used to alter call properties.

Defining type conversion definitions
This section discusses how the user can define type conversion definitions to quickly apply type converters to fields.

Working with plug-ins
This section discusses how the LLBLGen Pro plug-in system works and how to execute and examine the plug-ins installed.

Refreshing the catalog schemas
This section discusses what catalog refreshing does, how it is started and how you should interpret the feedback reported by the refresh process.

Generating code
This section describes the generation of source code from a project definition. It describes the different GUI elements used in the whole generation process and how to use them to make the generation process a success.



Designer - GUI elements
The Gui Elements Explained
The LLBLGen Pro designer is a true .NET application which can run by itself: it doesn't have to be hosted in an IDE. The designer consists of various elements, which are described briefly below. The complete GUI is context aware and direct, which means that when you change anything and there is no 'Cancel' button, the change is made directly. The complete GUI is then kept in sync: changing the name of an entity will propagate that change to all other GUI elements which show that name.

The LLBLGen Pro Designer

In the picture above, four main areas are marked. These areas are explained below. Besides these areas, the GUI contains a menu bar with options you can execute and a status bar which shows you information about the current project. The menu at the top shows general options; it doesn't contain element specific menu options. These are shown in context menus when you click with the right mouse button (RMB) on an element in, for example, the project explorer. The areas 1, 3 and 4 can be dragged around in the GUI; they're fully dockable. The GUI preserves the layout during sessions: the next time you run LLBLGen Pro, the areas are located at the same location as when you closed the application the last time. When you run LLBLGen Pro for the first time, it will put the areas on their default position and you can then re-position them inside the designer.


1. Project explorer. When a project is loaded, it is visualized in the project explorer. It allows you to browse the complete project and all elements in it; via context menus, which are opened by clicking elements with the RMB, you can add new elements, open editors or sort lists of elements. The project explorer can be docked to all sides of the designer window and can be slid to the side of the window by unpinning it. The project explorer has a checkbox at the top which allows you to show or hide elements marked as hidden, such as relations and fields mapped onto relations. If you don't want hidden elements of these types to clutter your project explorer, uncheck this checkbox.

2. Tab area. In this area all editors are opened, as well as some detail windows used by the Catalog explorer or, for example, the entity overview list. You can browse through the tabs via Ctrl+TAB and Ctrl+Shift+TAB, or via the Windows menu. To close tabs, use the close button at the top right of the tab area or use Ctrl+F4. The tab area is single-row, with the VS.NET 2003 tab scroll buttons at the top right. Some editors opened in this area have tabs themselves, as shown in the screenshot above; these sub tabs are located at the bottom of the tab.

3. Catalog Explorer. This window lets you browse the complete schema set included in the project. When you create a project, the complete schema set targeted by the project is embedded in the project itself, so you don't need a connection with the database. You can use the catalog explorer to see every element in the loaded schema set. When you click on an element, the catalog detail viewer is opened in the tab area, showing the details of the element you selected. The catalog explorer mostly has a viewing function; however, it allows you to change a few elements, like renaming a schema and changing the number of resultsets returned by a stored procedure. Access these functions by clicking the elements with the right mouse button to open the context menu of that particular element.

4. Application Output. This window is used by several components in the designer to list output to the user. It has a checkbox for verbose output, a setting which can also be controlled via the application preferences. It's best to minimize this window at the bottom by clicking its pin button to unpin it, which is the default position.


Designer - Preferences and Project properties
Preface
LLBLGen Pro can be controlled by two types of settings: User Preferences and Project Properties. User Preferences are stored per user and control the designer and the initial settings for new projects. Project Properties are settings specific to the loaded project and are stored in the project file. Once the project is created, the project properties are independent of the user preferences, so changing the user preferences has no effect on the settings in a project. This makes it possible to share project-specific settings among multiple developers. LLBLGen Pro v2.5's preferences file is a different file than the ones used by previous designers, so you can keep LLBLGen Pro v2.5, LLBLGen Pro v2.0 and LLBLGen Pro v1.0.200x.y installed side-by-side, each using its own preferences.

Note : Some of the settings below discuss pluralization and singularization of names. Pluralization and Singularization takes place if a plug-in has been bound to the designer events for name pluralization and singularization. See Working with plug-ins for more details.

User Preferences
It is key to set the User preferences before creating any projects, because a new project gets its initial values for several properties from the current user preferences. To access the user preferences dialog, select File -> Preferences... in the menu. Below is an example of what the dialog looks like. The values shown aren't the default values per preference. For the default value of a preference setting, please see the specific preference description below.


User preferences dialog

General settings
Preferred project folder. The folder where you want to store LLBLGen Pro project files. This folder will be preselected when you create a new project. You can browse to the folder by clicking the '...' button.

Preferred destination root folder. The preferred destination root folder in which all the files and directories are created by the code generator. You can browse to the folder by clicking the '...' button.

Catalog Refresher specific settings
AddNewElementsAfterRefresh. When set to true, any new entities, typed views and stored procedures are added to the project automatically after a catalog refresh has been completed. Default is false.

AddNewFieldsAfterRefresh. When set to true (default), any newly found, unmapped field in an entity's target which wasn't previously removed from the entity is added as a new entity field automatically after a catalog refresh has been completed, except if the entity is in a TargetPerEntityHierarchy hierarchy and is not the root of the hierarchy. If the entity is in a TargetPerEntityHierarchy hierarchy and the new target field is not nullable, it's added to the root entity only, if this setting is set to true.

AddNewViewsAsEntitiesAfterRefresh. When set to true, for each new view found in the catalog(s) a new entity will be added to the project automatically after a catalog refresh has been completed. Default is false. This option is ignored if AddNewElementsAfterRefresh is set to false as well.

CreateBackupBeforeRefresh. When set to true, LLBLGen Pro will back up your current project before refreshing any catalogs. This is the recommended setting. Default is true.

DriverCommandTimeout. Controls the timeout, in seconds, of the ADO.NET commands used to retrieve meta-data from the database when a project is created or when a catalog is refreshed. Not used with Firebird, as Firebird doesn't support command timeouts. Default is 30.

HideManyToManyRelationsOnCreation. When set to true (default is false), LLBLGen Pro will mark every new m:n relation created as 'hidden'. This can be helpful if your project has a lot of m:n relations which aren't really used. To unhide these relations, use for example the 'Relations' tab in the Entity editor and multi-select the relations you want to mark un-hidden.

ManuallySelectRenamedTargetsAfterRefresh. When set to true (default is false), LLBLGen Pro will show a dialog in which you can select the targets for entities for which the catalog refresher has detected that the targets aren't present in the new catalog and are probably renamed. Overruled by an unattended refresh, in which case the catalog refresher will use the default: false.

ShowReportAfterRefresh. When set to true, the refresher will show a report after the refresh has been completed with the changes it could detect. It's recommended that new users keep this option set to true. Default is true.

SyncMappedElementNamesAfterRefresh. When set to true (default is false), LLBLGen Pro will rename any entity, field mapped on relation, typed view, entity field and typed view field if the name of the element they're mapped on has changed, for example when a table field is renamed. Setting this option to true can break your own code, so use it with care.

SyncRenamedFieldElementsAfterRefresh. When set to true (default is false), LLBLGen Pro will sync manually renamed field elements after a refresh if SyncMappedElementNamesAfterRefresh is set to true. If SyncMappedElementNamesAfterRefresh is set to false, this setting is ignored. Field elements are: entity fields and fields mapped onto relations.

UpdateCustomPropertiesAfterRefresh. When set to true, any custom property in an entity, entity field, typed view, typed view field or stored procedure will be updated with a similarly named custom property in the newly retrieved catalog information after a refresh of the catalog(s). Setting this option to true can break your own code, so use it with care. If you have set RetrieveDBCustomProperties to false, this option has no effect. Default is false.

VerboseRefresh. When set to true, LLBLGen Pro will show a warning before the refresh starts and will show the backup filename if applicable. It's recommended that new users keep this setting set to true. Default is true.
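The AddNewFieldsAfterRefresh rules above can be sketched as a small decision function. This is an illustrative model of the documented behavior only, not LLBLGen Pro's actual implementation, and the function and return values are hypothetical:

```python
def add_new_field(entity_is_root, in_tph, field_nullable,
                  previously_removed, setting_enabled=True):
    """Decide how a newly found, unmapped target field is handled after a
    catalog refresh, per the AddNewFieldsAfterRefresh rules described above."""
    if not setting_enabled or previously_removed:
        return "skip"  # setting is off, or the user removed this field earlier
    if in_tph:
        if not field_nullable:
            # In a TargetPerEntityHierarchy, a non-nullable new field is
            # added to the root entity only.
            return "add to root entity"
        # Nullable fields are added only when the entity is the hierarchy root.
        return "add to entity" if entity_is_root else "skip"
    return "add to entity"
```

For example, a non-nullable field discovered on a subtype in a TargetPerEntityHierarchy ends up on the root entity rather than the subtype.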

Designer behavior specific settings
AdditionalPluginsFolder. The additional plug-ins folder to load plug-ins from. This is an absolute path.

ChangedElementBackColor. When an entity, typed list, typed view or stored procedure call definition is changed, its node's back color in the project explorer will change to the color set as ChangedElementBackColor. Default is Color.OldLace.

ChangedElementForeColor. When an entity, typed list, typed view or stored procedure call definition is changed, its node's text color in the project explorer will change to the color set as ChangedElementForeColor. Default is Color.DarkSlateGray.

ConfirmDesignerClose. When set to true, the user has to confirm closing the LLBLGen Pro designer. Default is false.

CreateBackupBeforeRunningPlugin. When set to true (default), a backup of the current project is created before you run a plug-in. It is recommended you leave this setting set to true. Default is true.

DefaultBackupFolder. The default backup folder in which LLBLGen Pro will create backups of the loaded project, for example before a refresh is performed. Specify the folder with a full path, or, if you want to make the path relative to the project location, specify it as a relative path. A relative path starts with '.\' or with '..\' (without the quotes). If you leave this preference empty (the default), the folder the project file is located in is used.

FieldsOnRelatedFieldAreReadOnly. When set to true, new fields mapped onto related fields are set to read-only. Default is false.

IgnoreSystemElementsInCatalogs. When set to true, system views and system tables in catalogs are ignored when you retrieve new entities and/or views. Default is false.

ProjectExplorerConfirmElementDelete. When set to true, each delete action performed in the project explorer has to be confirmed. Default is true.

ProjectExplorerOpenElementOnDoubleClick. When set to true, double-clicking an element node (entity / typed list / typed view / stored procedure call) will also open the element in its editor; the element's node is also expanded/collapsed. When set to false, double-clicking a node in the tree will just expand/collapse the node. Default is false.

ShowInScreenHints. When set to true, the various edit windows, which are shown as tabs in the center area, will show the helper hints at the bottom of the windows. Default is true.
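Folder settings such as DefaultBackupFolder accept either an absolute path or a path relative to the project location, where a relative path starts with '.\' or '..\'. A rough sketch of how such a value could be resolved (illustrative only, not LLBLGen Pro's code; the function name is hypothetical):

```python
import ntpath  # Windows-style path handling, matching the '.\' convention


def resolve_folder(setting_value, project_folder, default_folder):
    """Resolve a folder preference per the rules described above:
    empty -> the default folder; a '.\\' or '..\\' prefix -> relative to the
    project location; anything else -> treated as an absolute path."""
    if not setting_value:
        return default_folder
    if setting_value.startswith(".\\") or setting_value.startswith("..\\"):
        return ntpath.normpath(ntpath.join(project_folder, setting_value))
    return setting_value


# Example: a backup folder one level above the project file's folder
print(resolve_folder("..\\Backups", "C:\\Projects\\Shop", "C:\\Projects\\Shop"))
# C:\Projects\Backups
```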


Miscellaneous settings
VerboseApplicationOutput When set to true, the application output window's verbose setting will be set to true at application startup. Default is true.

Name construction specific settings

Note : Some of the settings below discuss pluralization and singularization of names. Pluralization and Singularization takes place if a plug-in has been bound to the designer events for name pluralization and singularization. See Working with plug-ins for more details.

EnforcePascalCasingAlways. When set to true, the setting MakeElementNamePascalCasing is always enforced. When set to false, the setting MakeElementNamePascalCasing is enforced only when names for new elements are created. Default is true.

EntityFieldNameStripPattern. The pattern which contains two sections, enclosed in {}: one for the prefixes and one for the suffixes. Add prefixes and suffixes to strip off by separating them with a comma. The first match is stripped. If the entity field's name is equal to a prefix/suffix strip pattern, nothing is stripped off. Stripping is case-insensitive. A new project will inherit this value. Default is {}{}.

EntityNameStripPattern. The pattern which contains two sections, enclosed in {}: one for the prefixes and one for the suffixes. Add prefixes and suffixes to strip off by separating them with a comma. The first match is stripped. If the entity's name is equal to a prefix/suffix strip pattern, nothing is stripped off. Stripping is case-insensitive. A new project will inherit this value. Example: prefix strip pattern tbl_ and suffix strip pattern _dev form the strip pattern {tbl_}{_dev}. Default is {tbl_}{}.

FieldMappedOnManyToManyPattern. The pattern used to construct the names for fields mapped on m:n relations. Pattern elements can be: {$StartEntityName} for the name of the start entity, {$EndEntityName} for the name of the end entity, {$IntermediateEntityName} for the name of the intermediate entity, a $P or $S suffix to entity name macros to pluralize or singularize them (example: {$EndEntityName$P}), {$StartEntityFieldNames} for all the names of the fields of the relation in the start entity, {$EndEntityFieldNames} for all the names of the fields of the relation in the end entity, and any literal text. An element can be mentioned more than once. A new project will inherit this value.

FieldMappedOnOneManyToOnePattern. The pattern used to construct the names for fields mapped on m:1 or 1:1 relations. Pattern elements can be: {$StartEntityName}, {$EndEntityName}, a $P or $S suffix to entity name macros to pluralize or singularize them (example: {$EndEntityName$P}), {$StartEntityFieldNames}, {$EndEntityFieldNames}, and any literal text. An element can be mentioned more than once. A new project will inherit this value.

FieldMappedOnOneToManyPattern. The pattern used to construct the names for fields mapped on 1:n relations. Pattern elements can be: {$StartEntityName}, {$EndEntityName}, a $P or $S suffix to entity name macros to pluralize or singularize them (example: {$EndEntityName$P}), {$StartEntityFieldNames}, {$EndEntityFieldNames}, and any literal text. An element can be mentioned more than once. A new project will inherit this value.

FieldMappedOnRelatedFieldPattern. The pattern used to construct the names for fields mapped onto a related field. Pattern elements can be: {$RelatedEntityName} for the name of the related entity which contains the mapped related field, and {$RelatedFieldName} for the name of the field in the related entity which is mapped by the field mapped onto a related field. You can also specify any literal text. An element can be mentioned more than once. A new project will inherit this value.

MakeElementNamePascalCasing. When set to true, all names of new entities, entity fields, typed views etc. will be properly PascalCased. This means that each character in the name is lowercased, except the first character after each word boundary ('_' or ' ') and the first character. All spaces are always removed. When set to false, the name is left untouched, except for the first character, which will always be uppercased. A new project will inherit this value. Default is true.

RemoveUnderscoresFromElementName. When set to true, all single underscores in names of new entities, entity fields, typed views etc. will be removed. When set to false, the name is left untouched. A new project will inherit this value. Default is true.

StoredProcNameStripPattern. The pattern which contains two sections, enclosed in {}: one for the prefixes and one for the suffixes. Add prefixes and suffixes to strip off by separating them with a comma. The first match is stripped. If the stored procedure's name is equal to a prefix/suffix strip pattern, nothing is stripped off. Stripping is case-insensitive. A new project will inherit this value. Example: prefix strip patterns pr_ and sp_ and suffix strip pattern _dev form the strip pattern {pr_, sp_}{_dev}. Default is {pr_, sp_}{}.

TypedViewFieldNameStripPattern. The pattern which contains two sections, enclosed in {}: one for the prefixes and one for the suffixes. Add prefixes and suffixes to strip off by separating them with a comma. The first match is stripped. If the typed view field's name is equal to a prefix/suffix strip pattern, nothing is stripped off. Stripping is case-insensitive. A new project will inherit this value. Default is {}{}.

TypedViewNameStripPattern. The pattern which contains two sections, enclosed in {}: one for the prefixes and one for the suffixes. Add prefixes and suffixes to strip off by separating them with a comma. The first match is stripped. If the typed view's name is equal to a prefix/suffix strip pattern, nothing is stripped off. Stripping is case-insensitive. A new project will inherit this value. Example: prefix strip pattern vw_ and suffix strip pattern _dev form the strip pattern {vw_}{_dev}. Default is {vw_}{}.
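As an illustration, the strip-pattern and PascalCasing rules described above could be modeled as follows. This is a sketch of the documented behavior under the interpretation that one prefix and one suffix can each be stripped; it is not LLBLGen Pro's actual implementation:

```python
import re


def strip_name(name, pattern):
    """Apply a strip pattern like '{tbl_}{_dev}': strip the first matching
    prefix/suffix, case-insensitively; never strip when the whole name
    equals a prefix or suffix."""
    prefixes, suffixes = re.match(r"\{(.*?)\}\{(.*?)\}", pattern).groups()
    lower = name.lower()
    for p in [x.strip() for x in prefixes.split(",") if x.strip()]:
        if lower.startswith(p.lower()) and lower != p.lower():
            name = name[len(p):]
            break  # only the first matching prefix is stripped
    lower = name.lower()
    for s in [x.strip() for x in suffixes.split(",") if x.strip()]:
        if lower.endswith(s.lower()) and lower != s.lower():
            name = name[:len(name) - len(s)]
            break  # only the first matching suffix is stripped
    return name


def pascal_case(name):
    """Lowercase everything except the first character and each character
    after a word boundary ('_' or ' '); remove the boundaries themselves,
    as MakeElementNamePascalCasing plus RemoveUnderscoresFromElementName do."""
    parts = re.split(r"[_ ]", name)
    return "".join(p[:1].upper() + p[1:].lower() for p in parts if p)


print(pascal_case(strip_name("tbl_ORDER_DETAILS", "{tbl_}{_dev}")))  # OrderDetails
```

For example, with the strip pattern {tbl_}{_dev}, the table name tbl_ORDER_DETAILS becomes OrderDetails as an entity name.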

Output specific settings
EncodingToUse. The encoding to use for text files written by the generator task performers. Use UTF8 if you use Visual SourceSafe, as Visual SourceSafe can't handle Unicode text files. Default is UTF8.

PreferedNamespacePrefix. The prefix that is inserted before the initial root namespace. Typically this is your company name; the initial root namespace is constructed from the catalog name. You can change this in the generator configuration window. Do not include a '.' suffix. Default is an empty string.

Project element creation
AutoAssignTypeConverterToNewField When set to true, the Type Conversion Definitions in the project are searched for a matching Type Converter with the new field and if found the Type Converter is assigned to the new field. Default is false. A new project will inherit this value.

Schema Element Retrieval specific settings
ManualSelectSProcsFromSchema. When set to true, the user has to select which stored procedure definitions to read from the schemas in the active catalog(s). When set to false, all stored procedure definitions are read (unless stored procedure retrieval is disabled in the create project / refresh catalog(s) dialogs) and added to the catalog in the project. For large schemas with a lot of stored procedures which are probably never used in the LLBLGen Pro project, set this option to true. Default is false.

RetrieveDBCustomProperties. When set to true, all custom property data of the database objects a new project object is based on will be copied to the object's Custom Properties. Default is false. Set this to false if you've created your SqlServer database from MS Access, or used a designer application which stored program-specific data in the 'extended properties' of SqlServer; 'false' will prevent this data from entering your project. The value of this setting is used to set the checkbox for Custom Properties retrieval in the New project creation window.

AutoDetermineSProcType. SqlServer / Sybase ASE specific. When set to true, the SqlServer or Sybase ASE driver will determine the type of a stored procedure (Retrieval or Action) automatically. When set to false, the user has to select the type for each stored procedure. Automatic retrieval can take a long time, plus it can have unwanted side effects like the execution of non-DML SQL statements (like sending an email); SELECT/INSERT/UPDATE/DELETE and DDL statements are never executed during type determination. When set to true, ManualSelectSProcsFromSchema is ignored and always considered false. Default is true.

Note: Please see the warning about automatic stored procedure meta-data retrieval on SqlServer to make sure you're using the right setting for your particular situation. It is set to true by default for convenience, though developers should pay attention to what this setting does and, if they're unsure, set it to false.

Task performers specific settings
CleanUpVsNetProjects. When set to true, the VS.NET project file task performer will first remove all file references for files from an existing VS.NET project file before adding the generated files. For VS.NET 2005 projects, it will remove all files generated by LLBLGen Pro, as these are marked with a Generator tag. Default is false.

Note: Use with care, as an old VS.NET 2003 project may contain references to files which aren't marked with LLBLGen Pro specific XML elements/attributes; in that case all file references are removed, which means that references to files you've added yourself are removed as well, forcing you to re-add those file references to the VS.NET project manually. Projects created for VS.NET 2005 with LLBLGen Pro v2.0 or higher do have the files marked as LLBLGen Pro generated, and it's safe to use this setting with these projects.

ConvertNulledReferenceTypesToDefaultValue. When set to false, an entity field which has a reference type (e.g. string) will return null / Nothing if the value for the field is null / Nothing. When set to true (default), the default value belonging to that reference type is returned. The default value for a type is produced by the generated class TypeDefaultValue. A new project will inherit this value.

FailCodeGenerationOnWriteError. When set to true (default is false), the code generator engines of LLBLGen Pro will throw a GeneratorAbortException to terminate the code generation cycle if a write error occurs. A write error is generated when the target file exists and is read-only and failwhenexistent is false for the executing task.

GenerateNullableFieldsAsNullableTypes. .NET 2.0 specific. This preference controls whether a new entity field, when nullable, should be generated as a field of type Nullable<T> / Nullable(Of T) instead of a field of a normal .NET type. Default is true.

HideManyOneToOneRelatedEntityPropertiesFromDataBinding. When set to true (default), LLBLGen Pro will generate Browsable(false) attributes on properties representing fields mapped onto m:1 or 1:1 relations, making the properties invisible for databinding. Setting this setting to false will make them show up as columns in some controls. A new project will inherit this value.

LazyLoadingWithoutResultReturnsNew. SelfServicing specific. When set to true (default), lazy loading functionality which fetches a m:1 or 1:1 related entity will return a new entity when the related entity to fetch is not found. When set to false, it will return null (Nothing).

ShowTaskPerformerReport. When set to true (default), LLBLGen Pro will show the task performer report in a modal dialog. The report can be copied to the clipboard, with formatting, if desired. Default is true.

StoreTimeLastGeneratedIntoProject. When set to true (default is false), the time the last generation cycle for a project took place is stored inside the project. This will make the project 'changed' after every generation cycle, which could influence source control behavior if you store the .lgp file in a source control system. A new project will inherit this value.

TdlEmitTimeDateInOutputFiles. When set to true (default), the TDL code emitter will emit the time and date for the <[Time]> statement; otherwise nothing will be emitted for that statement. Set to false only if you need files to stay the same if they're not effectively changed, for example for VSS. A new project will inherit this value. Default is true.

By clicking 'Save' the user preferences are saved into the file 'preferences25.xml' in the user's own application-specific folder at: C:\Documents and Settings\<current user>\Application Data\LLBLGen Pro

Project Properties
A project contains its own set of properties; some receive their initial value from the user preferences, others from the New Project wizard information. To access the project properties, click with the right mouse button on the project node in the Project Explorer, or select Project -> Properties... from the menu. The dialog consists of three tabs: one for the project properties, one for the project's custom properties and one for the abbreviations for the project. Custom properties are name-value pairs (name and value are both strings) which can be used in templates to generate project-specific information. Abbreviations are name-value pairs which are used to resolve abbreviations in names (see below for more details). Below is an example of what the dialog looks like. The values shown aren't the default values per property. For the default value of a property setting, please see the specific property description below.


Project properties dialog, normal properties tab


Project properties dialog, custom properties tab


Project properties dialog, abbreviations tab

General settings
Filename. The filename of the project. This filename is used when you save the project.

Catalog Refresher settings
These settings are inherited from the preferences and can be used to override a preference setting with the value specified in the project: the value 'Default' specifies that the value of the same setting in the Preferences should be used; the values True and False override the setting in the Preferences with True and False respectively.
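This tri-state override can be sketched as follows (an illustrative model of the documented behavior; the function name is hypothetical):

```python
def effective_setting(project_value, preference_value):
    """Resolve a project-level tri-state setting ('Default', 'True', 'False')
    against the user preference it may override."""
    if project_value == "Default":
        return preference_value  # fall back to the Preferences value
    return project_value == "True"  # project value wins


print(effective_setting("Default", True))   # preference wins
print(effective_setting("False", True))     # project overrides the preference
```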

Miscellaneous settings
AdditionalTaskPerformerFolder. If specified, LLBLGen Pro will look for task performer assemblies in this folder as well, besides the default task performer folder. Specify the folder with a full path, or, if you want to make the path relative to the project location, specify it as a relative path. A relative path starts with '.\' or with '..\' (without the quotes). If you don't want to use an additional folder, leave it empty.

AdditionalTasksFolder. If specified, LLBLGen Pro will look for *.tasks/*.platform/*.presets files in this folder as well, besides the default Tasks folder. Specify the folder with a full path or as a relative path, as described above. If you don't want to use an additional folder, leave it empty.

AdditionalTemplatesFolder. If specified, LLBLGen Pro will look for templateGroups.config/*.language/*.templatebindings files in this folder as well, besides the default Templates folder and the additional templates folder defined in the LLBLGen Pro config file. Specify the folder with a full path or as a relative path, as described above. If you don't want to use an additional folder, leave it empty.

AdditionalTypeConverterFolder. If specified, LLBLGen Pro will look for assemblies with TypeConverter classes in this folder as well, besides the default TypeConverterRootFolder defined in the LLBLGen Pro config file. Specify the folder with a full path or as a relative path, as described above. If you don't want to use an additional TypeConverter folder, leave it empty.

Note: Be aware that if you have type converters located in a folder specified in AdditionalTypeConverterFolder, and those type converters refer to types also implemented in the type converter dlls, they can't be loaded by the .NET CLR when you load a project. If that's the case, you'll get an exception when you try to load the project. To avoid these load errors, either place the type converters in LLBLGen Pro's TypeConverters folder in the LLBLGen Pro installation folder, or use an assembly load resolving file.

ProjectCreator. The name of the creator of the project.

ProjectName. The name of the project.

RetrieveDBCustomProperties. When set to true, all custom property data of the database objects a new project object is based on will be copied to the object's Custom Properties. See also the user preferences for more details. A new project inherits this value from the preferences.

Name construction specific settings
EnforcePascalCasingAlways : When set to true, the setting MakeElementNamePascalCasing is always enforced. When set to false, the setting is enforced only when names for new elements are created. Default is true. A new project inherits this value from the preferences.

EntityFieldNameStripPattern : The pattern contains two sections, each enclosed in {}: one for prefixes and one for suffixes. Add the prefixes and suffixes to strip off, separated by commas. The first match is stripped. If the entity field's name is equal to a prefix/suffix in the strip pattern, nothing is stripped off. Stripping is case insensitive. A new project inherits this value from the preferences.

EntityNameStripPattern : The pattern contains two sections, each enclosed in {}: one for prefixes and one for suffixes. Add the prefixes and suffixes to strip off, separated by commas. The first match is stripped. If the entity's name is equal to a prefix/suffix in the strip pattern, nothing is stripped off. Stripping is case insensitive. Example: prefix strip pattern tbl_ and suffix strip pattern _dev form the strip pattern {tbl_}{_dev}. A new project inherits this value from the preferences.

FieldMappedOnManyToManyPattern : The pattern used to construct the names for fields mapped on m:n relations. Pattern elements can be: {$StartEntityName} for the name of the start entity, {$EndEntityName} for the name of the end entity, {$IntermediateEntityName} for the name of the intermediate entity, a $P or $S suffix on the entity name macros to pluralize or singularize them (example: {$EndEntityName$P}), {$StartEntityFieldNames} for the names of the relation's fields in the start entity, {$EndEntityFieldNames} for the names of the relation's fields in the end entity, and any literal text. An element can be used more than once.

FieldMappedOnOneManyToOnePattern : The pattern used to construct the names for fields mapped on m:1 or 1:1 relations. Pattern elements can be: {$StartEntityName} for the name of the start entity, {$EndEntityName} for the name of the end entity, a $P or $S suffix on the entity name macros to pluralize or singularize them (example: {$EndEntityName$P}), {$SingularEndEntityName} for the singular form of the name of the end entity, {$StartEntityFieldNames} for the names of the relation's fields in the start entity, {$EndEntityFieldNames} for the names of the relation's fields in the end entity, and any literal text. An element can be used more than once.

FieldMappedOnOneToManyPattern : The pattern used to construct the names for fields mapped on 1:n relations. Pattern elements can be: {$StartEntityName} for the name of the start entity, {$EndEntityName} for the name of the end entity, a $P or $S suffix on the entity name macros to pluralize or singularize them (example: {$EndEntityName$P}), {$StartEntityFieldNames} for the names of the relation's fields in the start entity, {$EndEntityFieldNames} for the names of the relation's fields in the end entity, and any literal text. An element can be used more than once.

FieldMappedOntoRelatedFieldPattern : The pattern used to construct the names for fields mapped onto a related field. Pattern elements can be: {$RelatedEntityName} for the name of the related entity which contains the mapped related field, and {$RelatedFieldName} for the name of the field in the related entity which is mapped by the field mapped onto a related field. You can also specify any literal text. An element can be used more than once.

MakeElementNamePascalCasing : When set to true, all names of new entities, entity fields, typed views etc. are PascalCased: each character in the name is lowercased, except the first character and the first character after each word boundary ('_' or ' '). All spaces are removed. When set to false, the name is left untouched, except for the first character, which is always uppercased. A new project inherits this value from the preferences.

RemoveUnderscoresFromElementName : When set to true, all single underscores in names of new entities, entity fields, typed views etc. are removed. When set to false, the name is left untouched. A new project inherits this value from the preferences.

StoredProcNameStripPattern : The pattern contains two sections, each enclosed in {}: one for prefixes and one for suffixes. Add the prefixes and suffixes to strip off, separated by commas. The first match is stripped. If the stored procedure's name is equal to a prefix/suffix in the strip pattern, nothing is stripped off. Stripping is case insensitive. Example: prefix strip patterns pr_ and sp_ and suffix strip pattern _dev form the strip pattern {pr_, sp_}{_dev}. A new project inherits this value from the preferences.

TypedViewFieldNameStripPattern : The pattern contains two sections, each enclosed in {}: one for prefixes and one for suffixes. Add the prefixes and suffixes to strip off, separated by commas. The first match is stripped. If the typed view field's name is equal to a prefix/suffix in the strip pattern, nothing is stripped off. Stripping is case insensitive. A new project inherits this value from the preferences.

TypedViewNameStripPattern : The pattern contains two sections, each enclosed in {}: one for prefixes and one for suffixes. Add the prefixes and suffixes to strip off, separated by commas. The first match is stripped. If the typed view's name is equal to a prefix/suffix in the strip pattern, nothing is stripped off. Stripping is case insensitive. Example: prefix strip pattern vw_ and suffix strip pattern _dev form the strip pattern {vw_}{_dev}. A new project inherits this value from the preferences.
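The strip patterns above are plain text transformations. The following Python sketch illustrates how a {prefixes}{suffixes} strip pattern could be applied; this is an illustration of the rules as described above, not LLBLGen Pro's actual implementation, and the exact matching behavior is an assumption.

```python
import re

def apply_strip_pattern(name, pattern):
    """Apply a '{pre1, pre2}{suf1, suf2}' strip pattern to a name.

    Per the description above: the first matching prefix and the first
    matching suffix are stripped, matching is case insensitive, and a
    name equal to a prefix/suffix itself is left untouched.
    """
    match = re.fullmatch(r'\{(.*?)\}\{(.*?)\}', pattern)
    prefixes = [p.strip() for p in match.group(1).split(',') if p.strip()]
    suffixes = [s.strip() for s in match.group(2).split(',') if s.strip()]
    lowered = name.lower()
    for prefix in prefixes:
        # a name equal to the prefix itself is left untouched
        if lowered.startswith(prefix.lower()) and lowered != prefix.lower():
            name = name[len(prefix):]
            break  # only the first match is stripped
    lowered = name.lower()
    for suffix in suffixes:
        if lowered.endswith(suffix.lower()) and lowered != suffix.lower():
            name = name[:len(name) - len(suffix)]
            break
    return name
```

With the example pattern {tbl_}{_dev}, the table name tbl_Customers_dev would yield the entity name Customers, while a table named just tbl_ would be left as-is.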

Output specific settings
ConnectionStringKeyName : The name of the key in the app.config file under which the connection string is stored.

EncodingToUse : The encoding to use for output files generated for this project. A new project inherits this value from the preferences.

RootNameSpace : The root namespace value which is initially chosen in the generator configuration window.
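For example, if ConnectionStringKeyName were set to Main.ConnectionString (a hypothetical key name), the app.config entry the generated code reads could look along these lines (a sketch; your key name and connection string will differ):

```xml
<configuration>
  <appSettings>
    <!-- key name taken from the ConnectionStringKeyName project property -->
    <add key="Main.ConnectionString"
         value="data source=myServer;initial catalog=Northwind;integrated security=SSPI;" />
  </appSettings>
</configuration>
```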

Project element creation
AutoAssignTypeConverterToNewField : When set to true, the Type Conversion Definitions in the project are searched for a Type Converter matching a new field; if one is found, it is automatically assigned to that field. Default is false.

Task performers specific settings
AdapterDbSpecificSubFolderName : Adapter specific. The name of the subfolder in the destination folder in which the generated database specific files are placed.

AdapterDbGenericSubFolderName : Adapter specific. The name of the subfolder in the destination folder in which the generated database generic files are placed.

AdapterDbSpecificNamespaceSuffix : Adapter specific. The suffix appended to the root namespace for the database specific generated code. Do not prefix this value with a '.', as that's added automatically by the code generator.

AdapterDbSpecificProjectFileSuffix : Adapter specific. The suffix appended to the VS.NET project filename for the database specific generated code.

AdapterDbGenericProjectFileSuffix : Adapter specific. The suffix appended to the VS.NET project filename for the database generic generated code.

CleanUpVsNetProjects : When set to true, the VS.NET project file task performer first removes all file references from an existing VS.NET project file before adding the generated files. For VS.NET 2005 projects, it removes only the files generated by LLBLGen Pro, as these are marked with a Generator tag.


Note: use with care, as an old VS.NET project may contain references to files which aren't marked with LLBLGen Pro specific XML elements/attributes. In that case all file references are removed, including references to files you've added to the project yourself, which forces you to re-add those references to the VS.NET project manually. Projects created for VS.NET 2005 with LLBLGen Pro v2.0 or higher do have the generated files marked as LLBLGen Pro generated, so it's safe to use this setting with these projects. Default is false. A new project inherits this value from the preferences.

ConvertNulledReferenceTypesToDefaultValue : When set to true (the default), an entity field which has a reference type (e.g. string) and whose value is null / Nothing returns the default value belonging to that reference type; the default value for a type is produced by the generated class TypeDefaultValue. When set to false, such a field returns null / Nothing.

FailCodeGenerationOnWriteError : When set to true (default is false), the code generator engines of LLBLGen Pro throw a GeneratorAbortException to terminate the code generation cycle if a write error occurs. A write error is generated when the target file exists and is readonly and failwhenexistent is false for the executing task. A new project inherits this value from the preferences.

GenerateNullableFieldsAsNullableTypes : .NET 2.0 specific. Controls whether a new entity field which is nullable should be generated as a field of type Nullable&lt;T&gt; / Nullable(Of T) instead of a field of a normal .NET type. Default is true. A new project inherits this value from the preferences.

HideManyOneToOneRelatedEntityPropertiesFromDataBinding : When set to true (the default), LLBLGen Pro generates Browsable(false) attributes on properties representing fields mapped onto m:1 or 1:1 relations, making those properties invisible for databinding. Setting this to false makes them show up as columns in some controls. A new project inherits this setting from the preferences.

LazyLoadingWithoutResultReturnsNew : SelfServicing specific. When set to true (the default), lazy loading functionality which fetches an m:1 or 1:1 related entity returns a new entity when the related entity to fetch is not found. When set to false, it returns null (Nothing). A new project inherits this value from the preferences.

StoreTimeLastGeneratedIntoProject : When set to true (default: false), the time of the last generation cycle for the project is stored inside the project. This marks the project 'changed' after every generation cycle, which could influence source control behavior if you store the .lgp file in a source control system.

TdlEmitTimeDateInOutputFiles : When set to true (the default), the TDL code emitter emits the time and date for the <[Time]> statement; otherwise nothing is emitted for that statement. Set this to false only if you need files to stay identical when they're not effectively changed, for example for VSS or another source control system.

Abbreviation conversions
LLBLGen Pro supports the automatic conversion of abbreviation fragments in names into full name fragments, using abbreviation - full word pairs defined per project. You can specify these pairs on the third tab of the Project Properties. For example, a field called 'Addr', or fields with 'Addr' in their name, can be updated with 'Addr' replaced by 'Address', so CustAddr becomes CustAddress; if 'Cust' - 'Customer' is also added to the abbreviations, CustAddr is converted into CustomerAddress. Abbreviations are stored inside the project file, so everyone using the same .lgp file has the same abbreviations. They're not regular expressions, but simple Abbreviation - FullWord pairs, matched against fragments found during name processing. Fragments are elements separated by non-usable characters, spaces, underscores, a full word, or an uppercase/lowercase change. So the string AaBb_CCC Ddd has four fragments: Aa, Bb, CCC and Ddd. The following rules apply: Abbreviations are added per project, in the project properties dialog, and should be inserted right after the project has been created and before any entities are added to the project. They're used when names have to be created for entities, typed views and stored procedures, for fields of entities and typed views, and for parameters of stored procedure calls. The abbreviations are evaluated during name processing, before a FieldMappedOn*Pattern has been applied and also before casing rules have been applied.


All abbreviations are case insensitive. Abbreviations can also be used to enforce specific casing. For example, the abbreviation - full word pair ID - ID makes sure that all ID fragments found won't be cased to Id, but will be kept as ID. It's also possible to export/import abbreviations to/from text files. Such a text file should contain, on each line, the abbreviation and the full word separated by a TAB character, for example: addr<TAB>Address, with each line ending in a CRLF. You can import/export the abbreviations on the Abbreviations tab in the Project Properties.
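The fragment and abbreviation rules above can be sketched as follows; this is an illustrative Python approximation of the described behavior, not LLBLGen Pro's actual implementation.

```python
import re

def fragments(name):
    """Split a name into fragments at underscores, spaces and
    lower-to-upper case changes: 'AaBb_CCC Ddd' -> Aa, Bb, CCC, Ddd."""
    return re.findall(r'[A-Z]+(?![a-z])|[A-Z][a-z0-9]*|[a-z0-9]+', name)

def apply_abbreviations(name, abbreviations):
    """Replace each fragment that matches an abbreviation (case
    insensitively) with its full word; other fragments are kept."""
    lookup = {abbr.lower(): full for abbr, full in abbreviations.items()}
    return ''.join(lookup.get(f.lower(), f) for f in fragments(name))
```

For example, with the pairs Cust - Customer and Addr - Address, the name CustAddr is converted into CustomerAddress; the pair ID - ID keeps ID fragments cased as ID.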
LLBLGen Pro v2.6 documentation. ©2002-2008 Solutions Design


Designer - Creating a project
Preface
LLBLGen Pro uses database drivers to connect to database servers and retrieve schema information. Each database driver has its own database connection information screen, which is part of the Create New Project window. This example uses the SqlServer driver and the SqlServer connection information screen; when you use, for example, the Oracle 8i/9i driver, the controls may vary, but each connection information screen has tooltips and the layout itself should be straightforward. For any application you wish to use LLBLGen Pro for, you start by creating a new project. You do this either by clicking 'File --> New Project' in the menu, or by pressing Ctrl+N on your keyboard. This will pop up the following screen:

Project information
In the 'Project information' part of the screen you fill in the name of your project and the name of the creator. The name of the project is also used for the filename of the project file. The location where this file will be stored can be selected by clicking the browse button next to the location text box.

Database information
The Database information part of the 'Create a project' screen will not be editable until you have filled in the Project information data. You then select the database driver to use. This example assumes the SqlServer 7/2000/2005 driver is picked, which will show the SqlServer connection information screen inside the 'Database connection information' area. It asks for the following information:

- Type in the server name on which the database resides. This can be an IP number as well.
- Use the radio buttons to choose whether to log on to the server using Windows Authentication, or using Database Specific Authentication. In the latter case, you'll need to fill in your user id and password.
- Press 'Connect'. Your system will log on to the server and search for the catalogs (schema sets) stored there. On SqlServer you can now select one or more catalogs you wish to target; on Oracle you can select one or more schemas. On other supported databases just one database (file)/schema is supported, except on DB2, where one catalog with multiple schemas is supported.
- After you've chosen the catalog(s) to use for this project, decide which elements you want to retrieve from the schemas of these catalogs, on which the project will be based. Initially Tables and Views are checked and Stored Procedures and Custom Properties are not checked (whether Custom Properties is initially checked is based on the value of the user preference RetrieveDBCustomProperties). If you are not planning to use the custom property values defined with the various database elements (like tables and views), or you're not planning to use the stored procedures in the schemas, it is recommended to leave these elements unchecked, so project creation is faster and your project file will be smaller.
- When you've selected the elements you want to retrieve from the catalog(s), click Create, and all metadata of all the schema(s) found in the selected catalogs is read.

This information is embedded in the new project and the connection with the database is closed: because all schema information has been read, you don't need a connection with the database anymore, until an element in the schema is changed and you have to refresh the catalog in the project. See Refreshing the catalog schemas for details about refreshing the catalog.

Setting stored procedure information
If you've decided to retrieve stored procedure meta-data as well, and you've set the user preference ManualSelectSProcsFromSchema to true (and, if you're using SqlServer or Sybase ASE, AutoDetermineSProcType to false), then per catalog (or, on Oracle, per schema) the following screen will pop up, allowing you to select the stored procedures you'd like to include. On SqlServer you'll also be able to set the number of resultsets for each stored procedure. By using filtering you can select a lot of procedures at once and set their number of resultsets with one click. Procedures which return 1 resultset are generated as procedures which return a DataTable; procedures which return 2 or more resultsets are generated as procedures which return a DataSet. Only the meta-data of the checked stored procedures is retrieved, which can speed up project creation and limit project file size if you need just a few procedures from a large set in the catalog(s)/schema(s) you have to target.


When the project is created without errors, the Create New Project window is closed and the project is visualized in the Project Explorer. The project is not saved yet, nor are there any elements added to it. With a new project just created, it is a good time to open the project properties and set them to your needs; especially the patterns for stripping prefixes off table and view names, used to construct entity and typed view names, can be helpful. These values are inherited from the user preferences, so if you've already set them there and they don't need to be changed for this new project, you can move on directly to adding elements, like entities, to the project.

SqlServer specific warning
(Only applicable if you've set the user preference SqlServerAutoDetermineSProcType to true.) LLBLGen Pro needs to know how many resultsets a stored procedure returns: 0, 1 or more than 1. SqlServer doesn't store this information, but it provides a mechanism to determine the number of resultsets of a stored procedure: issuing SET FMTONLY ON before calling the procedure. When SET FMTONLY ON is issued before the actual procedure call, no data nor schema is affected by the procedure. This technique is used by SqlDataAdapter.FillSchema, for example. LLBLGen Pro's SqlServer driver uses this routine to determine the number of resultsets per stored procedure. While this is safe data-wise, a stored procedure might execute non-DML actions, like sending an email or starting a SqlServer job. You should be aware of that before you retrieve the stored procedures from the catalog you connect to. If you are unsure what the effects will be, either set the user preference SqlServerAutoDetermineSProcType to false, or use the following trick: extract the DDL schema for your catalog from SqlServer using the Enterprise Manager, create a different catalog in your SqlServer instance with it, remove from that new catalog the stored procedures you're not going to use in the LLBLGen Pro project, and use that catalog to create your project with.
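The mechanism can be illustrated with the following T-SQL fragment (the procedure name is hypothetical). With FMTONLY ON, SqlServer returns only the metadata of each resultset; the procedure's DML has no effect on data:

```sql
SET FMTONLY ON;
EXEC dbo.MyProcedure;   -- returns column metadata for each resultset, no data
SET FMTONLY OFF;
```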

Assembly load resolving file
While an LLBLGen Pro project file is loading, it can happen that types are found in the project data which can't be resolved, because the assembly these types refer to as their container isn't found by fusion (.NET's assembly loading technology). The designer then typically pops up an OpenFile dialog to let the user browse to the assembly which couldn't be loaded. This is necessary as the project data might contain the location of the assembly (e.g. the project property AdditionalTypeConverterFolder), but this is not yet available. As this can be cumbersome, a more automatic solution is available: the assembly load resolving file. This file contains, per assembly name, the real path of the assembly, so when a type load error occurs the assembly can be retrieved from that location. The file has to be created manually by the developer and has to have the name: projectname.lgp.assemblylocations (casing isn't important). So when your project is stored in Northwind.lgp, the assembly load resolving file should be: Northwind.lgp.assemblylocations. This file is only required if you experience assembly load errors. In the assembly load resolving file, the assembly names and exact locations have to be specified in a simple XML format:

<assemblyLocations>
    <assemblyLocation name="..." filename="..." />
    ...
</assemblyLocations>

Here, name is the full name of the assembly, and filename is the path + filename where the assembly can be found. If the path starts with '..\' or '.\', it is considered relative to the project file location; otherwise it's considered a full path. If an assembly couldn't be found and it's also not found in the list of assemblyLocations, the dialog is still used to resolve the missing assembly.
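As an example, a Northwind.lgp.assemblylocations file could look like this (the assembly name and path shown are hypothetical):

```xml
<assemblyLocations>
  <!-- relative path: resolved against the folder containing Northwind.lgp -->
  <assemblyLocation name="MyCompany.TypeConverters, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"
                    filename=".\TypeConverters\MyCompany.TypeConverters.dll" />
</assemblyLocations>
```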


Designer - Adding and editing entities
Preface
As mentioned in Concepts - Entities, typed lists and typed views, an entity maps onto a table or view definition, has the ability to inherit mapping information from a supertype, and is one of the building blocks of the code you're designing using LLBLGen Pro. Before you add any entities, make sure the project properties have the right strip patterns defined for entity names and field names. Those strip patterns are used to strip off prefixes you do not want in your entity names and field names. For example, say each table is prefixed with "Tbl_". When you set the strip pattern for entity names to "Tbl_" and you retrieve the entity names from the catalog schemas, LLBLGen Pro automatically strips the "Tbl_" prefix off all table names and uses the rest of the name for the entity name (spaces are stripped too, and the first character is always capitalized).

Adding entities
To add entities to your project, take the following steps:

- In the Project Explorer, right-click 'Entities' and choose 'Add New Entities Mapped On Tables From Catalog(s)'. You can also select 'Entities' and press Ctrl+T, or select 'Add New Entities Mapped On Tables From Catalog(s)' in the Project menu of the designer. This allows you to add entities based on a table. If you want to add entities based on views, select 'Add New Entities Mapped On Views From Catalog(s)' instead (or press Ctrl+V).
- A screen will pop up, showing all database tables (or views) in the catalog. As you can map as many entities onto a single target as you'd like, all tables (or views) are always listed. LLBLGen Pro automatically suggests names for the entities for these targets; if you already have an entity mapped on a given table/view, it makes sure the suggested name doesn't clash with the names already present in the project. By default, no possible entity is checked to be added to the project.
- You can check/uncheck entities either by using your mouse, or by stepping through them using your keyboard arrows; the spacebar toggles the checkbox belonging to the selected entity. You can also select one or more rows in the grid and click 'Toggle checkboxes of selected rows', which toggles the checkboxes of the rows you selected. Each row that is checked will be added to the project as an entity.
- Keep in mind that all names in an LLBLGen Pro project have their first character capitalized, need to be CLS compliant and should be usable as a C# identifier. This way each name can be used safely in the code. To change a suggested name, just select the cell in the grid and change the value.
- When you're done checking entity checkboxes, click 'Add to project'.

Note: it's not recommended to map an entity onto a view which returns duplicate rows.


All entities you've checked in the entity selection screen will be added to the project and visualized in the Project Explorer. If you expand the 'Entities' node, you'll see these new entities added, their fields mapped on table fields, the relations each entity has with other entities, and the fields mapped on these relations. As each entity is new, all entities have the front and back color you've set for changed items in the user preferences. Each relation can be expanded to see its further details: which entity fields form the relation and, if the relation is an m:n relation (many to many), the entity which binds the m:n related entities together. As you will notice, LLBLGen Pro can determine every relation there is between entities if the proper foreign key constraints are in place in the catalog schemas, and if the tables participating in those foreign keys have properly defined primary keys. Please click on the thumbnail image at the right to view the Project Explorer window after the Northwind entities have been added to the project and the project has been saved. A project without changes has a yellow icon with a blue '2'; a project with unsaved changes has a yellow icon with a red '2'.

Editing an entity
To edit an entity, you either double click it in the project explorer (if you've set the user preference ProjectExplorerOpenElementOnDoubleClick to true) or you click with the right mouse button on an entity, and select 'Edit / Properties' or press Ctrl + E. An entity editor opens in the tabbed area, showing the complete entity and all editable elements on six possible sub tabs: Inheritance Info, Fields on database fields, Fields on relations, Relations, Fields on related fields and Custom properties. The sub tabs can be navigated at the bottom of the Entity editor tab. Besides these sub tabs, the editor allows you to alter the name of the entity and view the target table in the catalog explorer.

Inheritance info sub tab
The first sub tab is Inheritance info. This is the sub tab which contains a full hierarchy view of the entity in the inheritance hierarchy it is in, as well as additional info about the entity being abstract, the discriminator value (if applicable), and the hierarchy type the entity is in. If the entity is not in an inheritance hierarchy, this tab isn't visible. A screenshot of this sub tab of the Employee entity and its hierarchy is shown below.


Inheritance info sub tab of the entity editor

The entity edited in the particular entity editor has a yellow horizontal band, as shown in the screenshot with the Employee entity. Other entities in the hierarchy have a grey band. If an entity is abstract, its border is dotted, as shown with the Employee entity. You can select multiple entities in the viewer, drag them around, open the fields listing for them, and make the viewer reset the layout by pressing Ctrl+L. The graph viewer also has a context menu, which you can activate by right-clicking the background of the viewer. You can also activate some entity-related actions by right-clicking an entity in the graph and selecting an option from the context menu.


At the bottom, below the graph viewer, you'll be able to set the zoom level. This can be handy when the hierarchy is large and doesn't fit on the screen. A checkbox lets you set the entity's Abstract state. You can only set an entity's abstract state (i.e. making the entity abstract) if the entity has a subtype, and it doesn't have a non-abstract supertype. If the entity is in a hierarchy of type TargetPerEntityHierarchy, you'll see per entity in the graph their discriminator value, and you can set the discriminator value for the entity being edited in a textbox at the bottom of the Inheritance info sub tab. The value is validated as you leave the textbox, so improper values aren't possible. For more information about inheritance mapping, see Designer - Inheritance mapping.

Fields mapped on database fields sub tab
The second sub tab (or if the entity isn't in an inheritance hierarchy, the first tab) is Fields mapped on database fields. This is the sub tab which contains all entity fields and information about the database fields, their types, sizes etc. An entity's fields are shown in three possible tabs: Mapped entity fields, Unmapped entity fields, and if the entity is in a hierarchy, Inherited entity fields. Per tab, a screenshot is given and the controls are explained.

Fields mapped on database fields sub tab of the entity editor: Mapped entity fields

When you select a field in the list, you can change its name. This name, as with all names, can't contain spaces and will have its first character uppercased. Also, all field names have to be unique within an entity (including fields mapped on relations and fields mapped on related fields). On databases which support identity columns and/or sequences, you can select which fields are sequenced fields and you can select the sequence to use to feed values to a field's database column. Sometimes more than one sequence is available. If the database provided a sequence automatically (for example if it's an identity column), that sequence is preselected. Be aware that making a field not an identity field, by unchecking the Is Identity / Sequenced field checkbox while the field is an identity field, could make your code fail at runtime due to insert problems.

Note: If an entity is in a hierarchy of type TargetPerEntity, a field defined in a subtype which has the same name as a field in a supertype overrides the field in the first supertype upwards. So if you have the hierarchy Employee - Manager - BoardMember and all three define a field 'Id', BoardMember.Id overrides Manager.Id, and Manager.Id overrides Employee.Id. This is also the case in the generated code. Be sure the casing is the same, otherwise the fields will be considered different by LLBLGen Pro, as they will also be different for C# (though not for VB.NET).

Note: You can press F4 when the focus is on the grid area of Mapped entity fields to move the focus to the Field properties, and F3 to move the focus from the Field properties back to the grid of Mapped entity fields.

When you're editing an entity which is mapped onto a view, or the entity being edited is mapped onto a table without a primary key, you'll be able to select which fields are part of the primary key by checking the 'Is part of primary key' checkbox on the field properties tab. In the case of the Order entity, this checkbox isn't enabled, as the table the Order entity is mapped on already has a primary key defined. The checkbox isn't enabled either if the currently selected field is part of a relation. When you've selected a field which is not read-only by default (an identity field, a field of a database type which doesn't require a value (like SqlServer's Timestamp), or a computed column), you're able to make the field read-only by checking the Is readonly checkbox. When the entity is mapped onto a view, and the currently selected field isn't part of the primary key nor marked read-only, you can use the Is nullable checkbox to set the field's nullability. When a field is nullable and the .NET type of the field is a ValueType, you can check the (.NET 2.0) Generate as Nullable type checkbox to make it get generated as a nullable type (e.g. Nullable&lt;int&gt; / Nullable(Of Integer)), if you're using .NET 2.0. The initial value of this checkbox is controlled by the project property GenerateNullableFieldsAsNullableTypes. At the bottom of the tab you see a combobox with known type converters which work on the .NET type of the selected field. In the screenshot above, no known type converters are available for the .NET type of the field: System.String. If there are type converters available, you can click the Set button to assign the type converter to the field. As soon as a field has a type converter set, its .NET type changes to the .NET type of the type converter, and the last column in the fields grid is set to the type converter's name. To reset the field to its actual .NET type, you click the Reset to default button at the bottom while the field is selected. To unmap a field, for example if you don't want a particular field or set of fields in a given entity, you can select one or more fields and click the Remove selected field(s) button. LLBLGen Pro will then remove the field or fields from the Mapped entity fields list, and also remove any relations with other entities which are based on those fields. The target field is then added to the list of Unmapped entity fields, of which you'll see an example below. If you've selected fields which aren't removable (because they're FK fields and the relations can't be removed because a Typed List is based on them), these are ignored.


Fields mapped on database fields sub tab of the entity editor: Inherited entity fields

The above screenshot shows the inherited fields of the entity BoardMember, which is a subtype of Manager, which is a subtype of Employee. As you can see, the fields inherited from both Employee and Manager are listed. If the entity isn't in a hierarchy, this tab isn't visible.


Fields mapped on database fields sub tab of the entity editor: Unmapped entity fields

The above screenshot is from the subtype entity FamilyCar, which is a subtype of CompanyCar. FamilyCar has one of its target entity fields unmapped, IsCabrio, as a family car typically isn't a cabrio. To map this field in FamilyCar, you select it and click Map selected target field(s). LLBLGen Pro will then map new fields onto the selected target fields, and any relations which can be created after that are created as well. The entity editor furthermore contains an area to specify custom properties for the selected field. Custom properties are initially derived from Extended Properties (SqlServer) or description fields (Oracle, other databases) of the mapped table field. You can add new ones, delete the initially created custom properties, or edit them. Custom properties are name-value pairs (both strings) which are generated into the code in a static hashtable. See LLBLGen Pro generated code - Custom properties for more information about using the custom properties available in the generated code.

Fields mapped on relations sub tab
The third sub tab (or if the entity isn't in an inheritance hierarchy, the second tab) is Fields mapped on relations. An entity can have two possible tabs: Fields on relations and Inherited fields on relations. If the entity isn't in an inheritance hierarchy, the tab 'Inherited fields on relations' isn't visible. Per tab a screenshot is given and the controls are explained.


Fields mapped on relations tab

This is the sub tab where you can edit the fields mapped on the relations of the entity. You can alter the name and/or toggle the hide flag of the selected field(s). Hiding fields mapped on relations is sometimes necessary, as LLBLGen Pro creates for you all relations it can construct from existing foreign key constraints. Among these relations are a lot of m:n relations you'll probably never use. All these relations have a field mapped onto them; it can clean up your generated code to hide the fields mapped on these relations, if you want to use a relation but hide the field mapped on it. You can also hide a relation completely (on both sides of the relation, or on one side) on the Relations sub tab (see below). Hiding a relation also hides the field mapped on the relation. A hidden field mapped on a relation is visualized with a greyed out version of the regular icon. LLBLGen Pro also tries to construct a description of what kind of data the field will return. This can be handy to correctly name the field: is it a field which returns a collection of entities ('Orders' in the entity 'Customer') or a field which returns a single entity ('Customer' in the entity 'Order')? For convenience, the complete relation is visualized, so you can easily determine what the relation represents and thus what the field mapped on the relation will represent for the current entity ('Orders' or 'Customer' or 'Employee', for example).


Inherited fields mapped on relations tab

This tab is for informational purposes only; no actions can be taken on the provided information. If you want to rename an inherited field, you have to open the editor of the entity in which the particular field is defined and rename it there. In the above screenshot, Employee contains the field WorksForDepartment with the relation Employee - Department, and Manager contains the field ManagesDepartment with the relation Manager - Department.

Important: Fields mapped onto relations aren't overriding fields with the same name inherited from the supertype. It's therefore not recommended to give a field mapped onto a relation the same name as an inherited field mapped onto a relation.

Relations sub tab
The fourth sub tab (or, if the entity isn't in an inheritance hierarchy, the third tab) contains a detailed listing of all the relations located in the entity and all inherited relations (if applicable). Inherited relations are displayed in grey with a suffix like '{containing entity name}'. You can expand them to fully visualize the relation. When you select one or more relations (only the top relations are selectable, not the relations inside an m:n relation nor the inherited relations), the 'Actions on selected relations' button is enabled and you can then select an action to execute on the selected relations. An example of this is shown in the screenshot of this tab.

Relations sub tab of the entity editor

Toggling the hide flag means that you hide/unhide the relation. When you hide a relation, the field mapped onto it is also hidden. Hidden objects will not show up in the generated code, as if they're not there. You can hide a relation on both sides (so it is also hidden in the related entity) or just on the side of the current entity. Hiding a 1:n or m:1 relation can also mean that you hide one or more m:n relations which are built on the hidden 1:n/m:1 relation: an m:n relation can't contain a hidden sub relation. A hidden relation is visualized with the greyed out version of the regular icon: (regular relation) or (custom relation).


Important: With SelfServicing you have to hide relations on both sides, because SelfServicing code uses logic in the related entity to retrieve the objects. For example, Customer.Orders asks the OrderCollection to retrieve orders based on the passed in Customer. When you hide the relation Order - Customer but not Customer - Order, your code will not compile.

To toggle the hide flag for multiple relations at once, hold Ctrl or Shift while selecting relations and either select the toggle hide flag action from the action button or right-click the relations and select the action to perform from the context menu.

Removing an existing relation can only be done for custom relations, i.e. relations you created yourself in the LLBLGen Pro designer. Custom relations have a blue dot in their icon. The designer will first determine whether it is possible to remove the selected relations, and will filter out relations present in Typed Lists. It will list the relations it will remove, including the m:n relations containing one of the selected relations. If you agree with the listed set, LLBLGen Pro will remove the relations and update the project in the LLBLGen Pro designer.

From the Relations sub tab you can create new custom relations using either the 'Add new custom 1:1/1:n/m:1 relation' or 'Add new custom m:n relation' buttons. These buttons open relation designer screens which are discussed in detail in LLBLGen Pro designer, Adding custom relations.

Fields mapped on related fields sub tab
The fifth sub tab (or, if the entity isn't in an inheritance hierarchy, the fourth tab) contains the fields mapped on related fields editor for the entity. This tab is not always enabled: it is only enabled if the entity has one or more 1:1 or m:1 relations, defined in the entity or inherited from the entity's supertype. A field mapped onto a related field is a field which represents a field in a related entity. The fields mapped onto related fields are shown in two possible tabs: 'Fields on related fields' and 'Inherited fields on related fields'. If an entity isn't in an inheritance hierarchy, the tab 'Inherited fields on related fields' isn't visible. Per tab, a screenshot is given and the controls are explained.


Fields mapped on related fields tab

For example, in the BoardMember entity we can add a field called 'BrandOfCompanyCar' which represents the CompanyCar.Brand field of the related CompanyCar entity of that particular BoardMember. You can specify whether the field is read-only or not; the default value for this setting is determined by the user preference FieldsOnRelatedFieldAreReadOnly. You can define fields mapped on related fields from entities related by relations in the current entity, but also via relations inherited from the supertype. It's generally more useful, though, to map a field in the entity which also contains the relation, like the field WorksForDepartmentName, which is defined in Employee and mapped onto the related field Department.Name. The name that's initially chosen for the field is created using the pattern specified as FieldMappedOnRelatedFieldPattern in the project properties. In the following screenshot of the 'Inherited fields on related fields' tab of BoardMember, you'll see this field present as an inherited field.


Inherited fields mapped on related fields tab

Fields mapped on related fields are generated into the code in such a way that they return the default value for their type if the related entity isn't available. There are other ways to add fields mapped on related fields, for example by adding your own code to the generated code in safe user code regions. Please see Adding your own code to the generated classes for details.
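A minimal sketch of what such a generated property can look like; the entity and field names are taken from the examples above, but the code shape itself is an assumption, not actual generated output:

```csharp
// Illustrative sketch (not actual generated output): a field mapped on a
// related field delegates to the related entity and returns the default
// value for its type when that entity isn't available.
public class CompanyCarEntity
{
    public string Brand { get; set; }
}

public class BoardMemberEntity
{
    public CompanyCarEntity CompanyCar { get; set; }

    // Read-only field mapped on the related field CompanyCar.Brand.
    public string BrandOfCompanyCar
    {
        get { return CompanyCar == null ? default(string) : CompanyCar.Brand; }
    }
}
```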

Note: if you're using SelfServicing and you're binding a collection of entities to a grid and you've added one or more fields mapped on related fields, it will trigger lazy loading for each individual entity in your collection. It is therefore recommended to use prefetch paths in this particular scenario, to prefetch the related entities using a more efficient query scenario. For more information about prefetch paths in selfservicing, please see Selfservicing - Prefetch Paths.
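The difference between the two fetch strategies can be sketched as follows. This fragment follows the SelfServicing conventions described in Selfservicing - Prefetch Paths; it requires the generated code and a database, so treat the exact type and member names as assumptions to verify against your own generated code.

```csharp
// Sketch only; won't compile outside a generated SelfServicing project.

// Without a prefetch path: binding this collection to a grid and reading a
// field mapped on a related field triggers one lazy-load query per row.
CustomerCollection customers = new CustomerCollection();
customers.GetMulti(null);

// With a prefetch path: the related Order entities are fetched in one
// additional query instead of one query per customer.
PrefetchPath path = new PrefetchPath((int)EntityType.CustomerEntity);
path.Add(CustomerEntity.PrefetchPathOrders);
customers.GetMulti(null, path);
```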


Important: Fields mapped onto related fields aren't overriding fields with the same name inherited from the supertype. It's therefore not recommended to give a field mapped onto a related field the same name as an inherited field mapped onto a related field.

Code generation options sub tab
The sixth sub tab (or, if the entity isn't in an inheritance hierarchy, the fifth tab) contains two different areas: the custom property editor for the entity and a section for output specific settings.

Custom properties
As with other custom properties (except the project and typed list custom properties), these are initially derived from the mapped table's Extended properties (SqlServer) or description fields (Oracle, other databases). You can add new custom properties, and delete or edit the initially created ones. Custom properties are name-value pairs (both strings) which are generated into the code in a static hashtable. For more information about using the custom properties available in the generated code, see LLBLGen Pro generated code - Custom properties. A screenshot of the entity's custom properties sub tab is shown below.


Custom properties sub tab of the entity editor

Output specific settings
The Output specific settings let you specify additional interfaces and namespaces which have to be generated into the code for that particular entity. You can use macros in the interface and namespace names:
{$RootNamespace}. This macro resolves to the root namespace specified when the code is generated.
{$ElementName}. This macro resolves to the element name, for example the entity name. This name is without suffixes like 'Entity'.
These macros are evaluated at generation time. To specify the same interface / namespace on a lot of entities at once, you can run the Add / Remove Additional Interface / Namespace plug-in which is shipped with LLBLGen Pro. As the name suggests, you can also use it to remove set interfaces / namespaces from selected entities. A screenshot of the entity's output specific settings sub tab is shown below.

Output specific settings sub tab of the entity editor

All changes you make in the editor will be reflected in the project immediately. Also, everything is kept in sync, so when you change an entity's name, the name is changed everywhere in the LLBLGen Pro designer directly. As you will notice, when you change something, the project's icon in the project explorer will change to the yellow/red dot, indicating there have been changes. In addition, the remark that the project has unsaved changes will appear in the status bar.

Viewing entity information
The entities currently in your project are shown in the project explorer, under the Entities node. This is great for a first glance, but if you want a detailed overview of every entity's information, you need something else. You can of course use the entity editor, but doing that for each entity gets a bit cumbersome. To show a full list of the entities in your project (which targets they're mapped on, whether they're abstract, whether they're in a hierarchy, whether they have a discriminator value set, etc.), right-click the Entities node in the project explorer and select View Entity List, or press Ctrl-Shift-L. You can also open the entity list overview by selecting View Entity List from the Project menu in the main designer menu. To sort on a column, simply click the column header. To sort on multiple columns, as with all grids in LLBLGen Pro, hold Shift while clicking the column headers.
LLBLGen Pro v2.6 documentation. ©2002-2008 Solutions Design


Designer - Adding custom relations
Preface
From the foreign key constraints found and the unique constraints defined, LLBLGen Pro creates a set of relations per entity. Often this is more than enough; however, it can be that the database doesn't have any foreign key constraints defined. When that's the case, and legacy databases often lack foreign key constraints, a lot of LLBLGen Pro's functionality can't be used, simply because there are no relations defined. Starting with v1.0.2004.1, LLBLGen Pro contains functionality to let you design relations for entities, which are used as if they were relations created from foreign key constraints. Custom relations are visualized with the normal relation icon with a blue dot. You can create these custom relations by using the buttons on the Relations sub tab in the entity editor or by right-clicking the Relations child node of an entity in the Project Explorer. As there are two different types of relations, 1:1/1:n/m:1 vs. m:n, there are two different designers for these relations, one for each type. These designers are discussed below. Custom relations are required if you want to add relations between entities which are mapped onto tables in different catalogs (SqlServer).

Adding 1:1/1:n/m:1 relations
This is the designer you'll need the most if you plan to create your relations by hand. M:n relations are based on a 1:n and an m:1 relation, so it is wise to start with this designer first. The logic behind the designer determines what the type of the relation is, so you can't make mistakes there. It also lists only related fields of the same type as the current field to relate, which also limits errors. A typical example is shown below.


Custom 1:1/1:n/m:1 relation editor

You first specify on which side the current entity is. In the example, AdminRightsPerAdministrator is a typical foreign key only entity: it relates two entities (AdminRight and Administrator) and has two relations, one with AdminRight and one with Administrator. In the screenshot, you see the relation with AdminRight in progress. We select the AdminRightId field in AdminRightsPerAdministrator, as that's the FK field for this relation. The dialog will automatically pre-select fields which have the same name, to speed up relation creation. The designer furthermore asks us to define the name for the field mapped on this relation, which is also pre-constructed for you using the related entity name, again to speed up relation creation. You can also specify whether new possible m:n relations should be added automatically, and whether this relation should also be added to the related entity (AdminRight in this case). This last option is enabled by default if the user preference HideManyToManyRelationsOnCreation under the 'Catalog Refresher' section is set to false. On the right, a checkbox is available to keep the editor open after you've clicked 'Create'; this saves time when you're adding a lot of relations. When you click 'Create', the defined relation is first checked for whether it already exists. If it doesn't, the name for the field mapped onto this relation is also verified. If everything is OK, the relation is created, along with any new m:n relations and, if requested, the opposite relation in the related entity. The GUI is updated automatically with this new relation.

Note: Discriminator fields shouldn't be foreign key fields. The reason for this is that by setting a different related entity, the type of the entity could change because the discriminator field value changes, which is impossible as that would mean the entity object in memory should also change type dynamically.


There is no way to 'edit' a relation in a designer, as editing a relation would mean: create a different relation, discard the old one. Because relations can be present in Typed Lists, users first have to remove the old relation. Editing the field mapped on the relation can be done on the Fields mapped on relations sub tab.

Adding m:n relations
When there are a couple of 1:n and m:1 relations present in your project, you can add m:n relations. If you haven't checked the 'Automatically detect new m:n relations' checkbox when you added a new relation, it is possible that you actually can add an m:n relation which isn't present in the system. Below is a screenshot of the custom m:n relation editor:

Custom m:n relation editor

It lists the two relations required for the m:n relation to create. Because we created two relations from AdminRightsPerAdministrator, an m:n relation can be created between AdminRight and Administrator. The screenshot shows that relation. You again have the option to create the m:n relation in the related entity as well, to specify the name of the field mapped on this relation, and to keep the editor open to add multiple relations more easily. Please note that selfservicing requires relations to be present at both sides (i.e.: Customer - Order and Order - Customer have to be present).


Designer - Adding typed lists
Preface
As mentioned in Concepts - Entities, lists and views, a Typed List is a list of entity fields from a set of entities which have 1:1, 1:n or m:1 relations with each other. This means that when there are no entities added to the project, you can't create a Typed List.

Adding a Typed List
To add a Typed List to your project, take the following steps: In the Project Explorer, right-click 'Typed Lists' and choose 'New Typed List', or select 'Add New Typed List' from the Project menu. A little screen will pop up, in which you can name the Typed List. Please note that this name has to be CLS compliant, meaning it has to be usable as a variable name and cannot contain special characters or spaces. Also, this name should be unique among typed lists. After clicking 'OK', the Typed List will be added to your project explorer and the editor for the new Typed List will be opened in the tabbed area. Now you can begin constructing the Typed List. (This editor is also opened when you select Edit / Properties... in the context menu that pops up when you right-click a Typed List in the project explorer.) You can also create a copy of an existing Typed List by right-clicking an existing Typed List in the Project Explorer and selecting the option Create New As Copy from the context menu.

Note: It's not recommended to specify a name for a typed list which matches an entity's name or a typed view's name.

Constructing a Typed List
As with the entity editor, the Typed List editor is divided into sub tabs. The Typed List editor has the following three sub tabs: Entities selection, Fields mapped on entity fields and Custom properties. The sub tabs can be navigated at the bottom of the Typed List editor tab. Besides these sub tabs, the editor allows you to alter the name of the Typed List.

Entities selection sub tab
The first sub tab is Entities selection. This is the sub tab which allows you to manage the entities in the Typed List, the aliases used for these entities, the relations used and the join type to use for these relations. A screenshot of this sub tab of a CustomerAddresses Typed List is shown below.


Entities selection sub tab of the Typed List editor

To construct a Typed List, you start on the Entities selection sub tab. In the upper left corner of this tab, a list of all entities in the project is shown. Select the entity to start your list with and click Add to add the entity to the Typed List. It will be placed in the top right list of entities which are currently in the Typed List, without an alias. You don't have to alias an entity if it is added just once to the Typed List. In the screenshot above, Address is added twice, once for the visiting address and once for the billing address; aliases are specified to distinguish them in the Typed List. You have to use these aliases as well in filters when you're going to use the generated code. Throughout the Typed List editor, an aliased entity is referred to by its alias, otherwise by its normal name. All entities in the list which have a relation (1:1, m:1 or 1:n) with one or more of the entities in the Typed List are selectable; all other entities are greyed out. Normally, you first add all the entities you want to include in the Typed List.

On the Entities selection tab you also see the relations used to form the Typed List. You can view a relation in more detail and also select an alternative, if one is available. An alternative relation is available when the current relation can be removed from the Typed List in favor of the alternative while the entities in the Typed List remain related to each other, without one or more entities being 'orphaned' from the rest: it has to be possible to navigate from one entity to another in the Typed List over the relations in the Typed List.

To remove an entity from the Typed List, you can only select an entity which will not break a relation chain. This means that when an entity is removed, all entities left in the Typed List are still related to each other in one way or another. Example: Customer, Order and Employee are added to the Typed List. Order is related to both Customer and Employee, which by themselves don't have a 1:1, 1:n or m:1 relation with each other. Removing 'Order' would break the relation chain, because the entities left would not have a relation with each other and the Typed List could then not be created.


Important: Keep in mind that the order in which you add the entities to the Typed List controls which relations are used by the Typed List: when you add Customer first and Order later, the relation will be Customer - Order (1:n). When you add Order first and Customer later, the relation will be Order - Customer (m:1). This is important because Customer - Order is a weak relation, while the relation Order - Customer (m:1) is not. This can be of influence when you want to use the weak relations support in your code. See this section for more information about weak relations. If an entity has more than one relation with another entity in the typed list, you will be able to select an alternative relation for the currently selected relation. By clicking the Set button, the selected alternative relation is used instead of the currently selected relation. To specify how the relations should be joined in SQL, you can specify a join type per relation: None (default, results in an inner join), left, right, cross or inner. Be aware that a cross join can be performance intensive.
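Conceptually, the aliases and join types defined on this sub tab determine the shape of the single SELECT statement the Typed List is fetched with. A sketch for the CustomerAddresses example above (table, column and alias names are illustrative assumptions, not the actual generated SQL):

```sql
SELECT c.CompanyName,
       va.Street AS VisitingAddressStreet,
       ba.Street AS BillingAddressStreet
FROM Customer c
     INNER JOIN Address va              -- join type 'None' results in an INNER JOIN
        ON c.VisitingAddressId = va.AddressId
     LEFT JOIN Address ba               -- join type 'left' results in a LEFT JOIN
        ON c.BillingAddressId = ba.AddressId
```

The entity aliases ('va', 'ba') are the same aliases you have to use in filters at runtime, as described above.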

Fields mapped on entity fields sub tab
The second tab is the Fields mapped on entity fields sub tab. This sub tab contains the list of fields in all the entities added to the Typed List (the All available fields list) and a list of fields selected from that list, which are the fields making up the Typed List (the Fields in Typed List list). By default, no fields are in the Typed List; you have to select the fields you want from the All available fields list by checking their checkboxes. In a way, this is like selecting the fields in the SELECT clause of an SQL query with a multi-table join in the FROM clause. A screenshot of this sub tab of an OrderCustomer Typed List is shown below.


Fields mapped on entity fields sub tab of the Typed List editor

To add / remove a field from the Typed List, simply check / uncheck the field's checkbox in the All available fields grid. The logic will create an alias for the field automatically using the entity field name. Normally this is enough; however, if you want to alter it, you can do so by selecting the field in the list and altering the name in the Alias to use textbox. The alias for a field can also be specified directly in the Field Alias column of the Fields in Typed List grid, as can the Caption column values. The caption can be used at runtime to specify grid column header names, though you can also use custom properties for that. Per field added to the Typed List, you can specify an aggregate function. This can be handy if you want to use a group by collection at runtime. Group by collections aren't included in the designer, to give you the flexibility to define multiple group by collections. This sub tab furthermore contains an area to specify custom properties for the selected field. You can add new custom properties, and delete or edit existing ones. Custom properties are name-value pairs (both strings) which are generated into the code in a static hashtable. For more information about using the custom properties available in the generated code, see Generated code - Custom properties. If you want to change the order of the fields in a Typed List, select the field in the Fields in Typed List grid and click the Move Up or Move Down button. You can also right-click the field in the project explorer and select the action to take (move up or down) from the context menu.

Custom properties sub tab
The third sub tab contains the custom property editor for the Typed List. You can add new custom properties, and delete or edit the existing ones. Custom properties are name-value pairs (both strings) which are generated into the code in a static hashtable. For more information about using the custom properties available in the generated code, see Generated code - Custom properties. A screenshot of the Typed List custom properties sub tab is shown below.

Custom properties sub tab of the Typed List editor

As with the entity editor, when you change something in your Typed List, the changes are automatically propagated to all other elements in the GUI showing the changed information. Also, when you have a Typed List editor open and you remove an entity which is part of that Typed List, you'll see the change in the Typed List editor immediately.


Designer - Adding typed views
Preface
As mentioned in Concepts - Entities, lists and views, a typed view is a 1:1 map on existing views in your catalog(s) / schema(s). When there are no views defined in the catalog schemas, you can't add a typed view.

Adding a typed view
If there are existing views in your catalog(s), you can right-click 'Typed Views' in your Project Explorer and choose 'New Typed View'. You can also select 'Add New Typed View' from the Project menu. A screen will pop up, showing all database views in the catalog which don't have a corresponding typed view in your project. By default, all possible views are checked to be added to the project, except system views. You can check / uncheck typed views by either using your mouse, or by stepping through them using your keyboard arrows; the spacebar will toggle the checkbox belonging to the selected row on and off. You can also select one or more rows in the grid and click 'Toggle checkboxes of selected rows', which will toggle the checkboxes of the rows you selected. Each row that is checked will be added to the project as a typed view. Keep in mind that all names in an LLBLGen Pro project have their first character capitalized, need to be CLS compliant and should be usable as a C# identifier; this way each name can be used safely in the code. When you're done selecting, click 'Add to project'.

Note: Stored queries without parameters in MS Access will show up as views.

Note: It's not recommended to specify a name for a typed view which matches a typed list's name or an entity's name.

Editing a typed view
To edit a typed view, you click with the right mouse button on a typed view, and select 'Edit / Properties' or press Ctrl + E. A typed view editor opens in the tabbed area, showing the complete typed view and all editable elements on two sub tabs: Fields mapped on view fields and Custom properties. The sub tabs can be navigated at the bottom of the Typed View editor tab. Besides these sub tabs, the editor allows you to alter the name of the typed view and view the target view in the catalog explorer.

Fields mapped on view fields sub tab
The first sub tab is Fields mapped on view fields. This is the sub tab which contains all typed view fields and information about the database fields, their types, sizes etc. A screenshot of this sub tab of the Invoices typed view is shown below.


Fields mapped on view fields sub tab of the typed view editor

When you select a field in the list, you can change its name. This name, as with all names, can't contain spaces and will have its first character upper cased. Also, all field names have to be unique within a typed view. You can also specify a caption, which can be used at runtime to specify a column header in a databinding scenario (you have to set the column headers by hand, this isn't automatic). At the bottom of the tab, you see a combobox with known type converters which work on the .NET type of the selected field. In the screenshot above, one type converter is available for the .NET type of the field: System.Int32. To assign the selected type converter to the field, click the Set button. As soon as a field has a type converter set, its .NET type changes to the .NET type of the type converter, and the last column in the fields grid is set to the type converter's name. To reset the field to its actual .NET type, click the Reset to default button while the field is selected. The tab furthermore contains an area to specify custom properties for the selected field. Custom properties are initially derived from Extended Properties (SqlServer) or description fields (Oracle, other databases) of the mapped view field. You can add new custom properties, and delete or edit the initially created ones. Custom properties are name-value pairs (both strings) which are generated into the code in a static hashtable. For more information about using the custom properties available in the generated code, see LLBLGen Pro generated code - Custom properties.
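To make the type converter behavior concrete, below is a bare-bones sketch of a converter that surfaces an Int32 field as a Boolean, built on .NET's System.ComponentModel.TypeConverter. Real LLBLGen Pro type converters implement additional members (see the Type Converters section of this documentation); the class name here is hypothetical.

```csharp
using System;
using System.ComponentModel;
using System.Globalization;

// Hypothetical sketch: converts between the database-side Int32 value
// and the .NET-side Boolean the field exposes once the converter is set.
public class BooleanNumericConverter : TypeConverter
{
    public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
    {
        return sourceType == typeof(int);
    }

    public override object ConvertFrom(ITypeDescriptorContext context, CultureInfo culture, object value)
    {
        // Database value (0/1) -> .NET type (bool).
        return ((int)value) != 0;
    }

    public override object ConvertTo(ITypeDescriptorContext context, CultureInfo culture, object value, Type destinationType)
    {
        // .NET type (bool) -> database value (0/1).
        return ((bool)value) ? 1 : 0;
    }
}
```

Once such a converter is assigned, the field's .NET type in the designer changes from System.Int32 to System.Boolean, as described above.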

Note: You can press F4 when the focus is on the grid with fields to move the focus to the Field properties and F3 to move the focus from the Field properties to the grid with fields.

Custom properties sub tab
The second sub tab contains the custom property editor for the typed view. As with other custom properties (except the project and typed list custom properties), these are initially derived from the mapped view's Extended properties (SqlServer) or description fields (Oracle, other databases). You can add new custom properties, and delete or edit the initially created ones. Custom properties are name-value pairs (both strings) which are generated into the code in a static hashtable. For more information about using the custom properties available in the generated code, see LLBLGen Pro generated code - Custom properties. A screenshot of the typed view's custom properties sub tab is shown below.


Custom properties sub tab of the typed view editor

All changes you'll make in the editor will be reflected in the project immediately. Also, everything is kept in sync, so when you change a typed view's name, the name is changed everywhere in the GUI directly. As you will notice, when you change something, the project's icon in the project explorer will change to the yellow/red dot, indicating there have been changes. In addition, in the status bar the remark that the project has unsaved changes will appear.


Designer - Adding stored procedure calls
Preface
As mentioned in Concepts - Entities, lists and views, you can re-use existing code, defined in stored procedures in the target catalog(s) / schema(s). To add these stored procedures to your project, you go through the following steps:

Adding a stored procedure call
If you double-click 'Stored Procedure Calls' in your Project Explorer, a tree will be shown containing 'Retrieval Stored Procedure Calls' and 'Action Stored Procedure Calls'. Stored procedure calls that return one or more resultsets are Retrieval Stored Procedure Calls; all others are Action Stored Procedure Calls. To add a stored procedure call, right-click either category and select 'Add Stored Procedure Calls'. You can also select 'Add New Action Stored Procedure Calls' or 'Add New Retrieval Stored Procedure Calls' from the Project menu. If the catalog schemas do not contain any Action stored procedures or Retrieval stored procedures, the menu option of that category is greyed out. A screen will pop up, showing all stored procedures in the catalog which are not yet present in your project and which fall in the category of the type of stored procedure call you want to add: Action or Retrieval. By default, all possible stored procedure calls are checked to be added to the project. You can check / uncheck stored procedure calls by either using your mouse, or by stepping through them using your keyboard arrows; the spacebar will toggle the checkbox belonging to the selected stored procedure call on and off. You can also select one or more rows and click the 'Toggle checkboxes of selected rows' button to toggle the checkboxes of the selected rows. When you're done selecting, click 'Add to project'; the call definitions are then added to the project.

Note: There is no support for parameterized stored queries in MS Access, so no procedure call can be created when targeting MS Access.

Editing a stored procedure call
To edit a stored procedure call, you click with the right mouse button on a stored procedure call, and select 'Edit / Properties' or press Ctrl + E. A stored procedure call editor opens in the tabbed area, showing the complete stored procedure call and all editable elements on two sub tabs: Parameters and Custom properties. The sub tabs can be navigated at the bottom of the stored procedure call editor tab. Besides these sub tabs, the editor allows you to alter the name of the stored procedure call and view the target procedure in the catalog explorer.

Parameters sub tab
The first sub tab is Parameters. This is the sub tab which contains all input and output parameters of the stored procedure and their specifics: type, size, real name etc. A screenshot of this sub tab of the EmployeeSalesbyCountry stored procedure call is shown below.

Page 124

Parameters sub tab of the stored procedure call editor

When you select a parameter in the list, you can change its name. This name, as with all names, can't contain spaces and will have its first character upper-cased. Also, all parameter names have to be unique within a stored procedure call definition. To mark a parameter as nullable, check its Is nullable checkbox. A stored procedure's parameters are generated as method parameters for the generated method representing the stored procedure call in code. When the .NET type of the parameter is a ValueType, it will be generated as a method parameter of type Nullable&lt;T&gt; / Nullable(Of T) when .NET 2.0 is the target platform. When the target platform is .NET 1.x, the .NET type of the parameter in the generated method's signature will be object.
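The type rules above can be sketched as follows. This is an illustrative model of the decision, not the generator's actual code; the function name and the platform strings are assumptions made for the example.

```python
# Sketch of how a nullable stored procedure parameter's .NET type could map
# to the generated method parameter type, per the rules described above.
def method_parameter_type(net_type: str, is_value_type: bool,
                          is_nullable: bool, target_platform: str) -> str:
    """Return the parameter type used in the generated call method."""
    if not is_nullable or not is_value_type:
        # Non-nullable parameters and reference types keep their .NET type.
        return net_type
    if target_platform == ".NET 2.0":
        # Nullable value types become Nullable<T> on .NET 2.0.
        return f"Nullable<{net_type}>"
    # .NET 1.x has no Nullable<T>, so 'object' is used instead.
    return "object"

print(method_parameter_type("System.Int32", True, True, ".NET 2.0"))  # Nullable<System.Int32>
print(method_parameter_type("System.Int32", True, True, ".NET 1.1"))  # object
```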

Custom properties sub tab
The second sub tab contains the custom property editor for the stored procedure call. As with other custom properties (except the project and typed list custom properties), these are initially derived from the target stored procedure's Extended properties (SqlServer) or description fields (Oracle, other databases). You can add new custom properties, delete the initially created ones or edit them. Custom properties are name-value pairs (both strings) which are generated into the code in a static hashtable. See LLBLGen Pro generated code - Custom properties for more information about using the custom properties available in the generated code. A screenshot of the stored procedure call's custom properties sub tab is shown below.


Custom properties sub tab of the stored procedure call editor

All changes you make in the editor are reflected in the project immediately. Everything is kept in sync, so when you change a stored procedure call's name, the name is changed everywhere in the GUI directly. As you will notice, when you change something, the project's icon in the project explorer changes to the yellow/red dot, indicating there have been changes. In addition, a remark that the project has unsaved changes appears in the status bar.
LLBLGen Pro v2.6 documentation. ©2002-2008 Solutions Design


Designer - Defining and using type conversion definitions
Preface
As discussed in Concepts - Type Converters, the easiest way to work with type converters in the designer is to define a set of type conversion definitions in an existing project. This section guides you through the steps to create a basic type conversion definition using the shipped type converter for converting int/byte etc. to System.Boolean and back.

Creating a new / editing an existing type conversion definition
After loading a project, you can open the type conversion definition dialog by selecting "Edit Type Conversion Definitions..." from the context menu when you right-click the project node in the Project Explorer, or select it from the Project menu in the LLBLGen Pro GUI. If no type conversion definition currently exists, you'll see this dialog:

By clicking Add new..., you'll be able to define a new type conversion definition using a new dialog:


As type conversion definitions are used to set type converters on fields, the first thing you have to specify is the original type the type conversion has to work on. This type is then used to determine which type converters accept values of that type for conversion. As our example type converter accepts int/byte etc., we selected Int32 as the type to work on, and LLBLGen Pro will then list the type converters found which accept that type. In our situation, we select the BooleanNumericConverter type converter. After clicking OK, you'll see a type conversion definition in the listview at the top of the main type conversion definition dialog. This type conversion definition is already usable, though when used, it matches all fields which have an original .NET type of System.Int32. If this type conversion definition is applied automatically, it might therefore match fields which shouldn't get the BooleanNumericConverter as type converter. To solve that, you can specify up to four filters, each of which can be enabled by checking its checkbox:
- Database type. With this filter you can define on which database-specific type, like NUMBER or unique_identifier, the type conversion definition has to work. The types listed in the drop down box are all the database types supported by the project's database driver. Note: a type may not be listed even though the target database supports it, for example INTEGER on Oracle. Such types are synonyms for existing types and you should pick the real database type instead.
- Length. With this filter you can define what the length setting of the field has to be to let it match the type conversion definition. Use this only for type conversion definitions which work on a non-numeric type, like string.
- Precision. With this filter you can define what the precision setting of the field has to be to let it match the type conversion definition. Use this only for type conversion definitions which work on a numeric type, like Int32.
- Scale. With this filter you can define what the scale setting of the field has to be to let it match the type conversion definition. Use this only for type conversion definitions which work on a fractional numeric type, like Single.
All filters are applied on top of the filter on the type the type conversion definition works on. This means that if you select a database type which won't result in the .NET type the type conversion definition works on, the type conversion definition will never match any field. When applying type conversion definitions to fields, the type conversion definition which matches the most filters is selected. If two or more type conversion definitions match a given field equally well, the one found first is selected. When you're satisfied with your type conversion definitions, click OK to close the form. Don't forget to save your project.
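The best-match selection described above can be sketched as follows. The dictionary shapes for definitions and fields are illustrative assumptions, not LLBLGen Pro's internal data structures.

```python
# Sketch of type conversion definition matching: the original .NET type
# filter always applies; each enabled optional filter (db type, length,
# precision, scale) must match exactly and raises the match score. The
# definition with the highest score wins; ties go to the one found first.
def match_score(definition, field):
    if definition["net_type"] != field["net_type"]:
        return None  # the .NET type filter always applies
    score = 0
    for key in ("db_type", "length", "precision", "scale"):
        if key in definition:                 # filter enabled on this definition
            if definition[key] != field.get(key):
                return None                   # enabled filter must match exactly
            score += 1
    return score

def best_definition(definitions, field):
    best, best_score = None, -1
    for definition in definitions:            # first found wins on equal scores
        score = match_score(definition, field)
        if score is not None and score > best_score:
            best, best_score = definition, score
    return best

defs = [
    {"name": "IntToBool", "net_type": "System.Int32"},
    {"name": "NumberToBool", "net_type": "System.Int32",
     "db_type": "NUMBER", "precision": 1, "scale": 0},
]
field = {"net_type": "System.Int32", "db_type": "NUMBER", "precision": 1, "scale": 0}
print(best_definition(defs, field)["name"])  # NumberToBool
```

A field of type System.Int32 without the NUMBER(1,0) specifics would fall back to the filterless IntToBool definition, and a field with a different .NET type would match nothing at all.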


Note: Deleting a type conversion definition doesn't remove the set type converter definitions on fields. It only removes the type conversion definition.

Using an existing type conversion definition
Type conversion definitions are meant to make it very easy, even automatic, to set type converters on fields. For example, in a large Oracle project, you'd likely define a type conversion definition using the BooleanNumericConverter which applies to NUMBER with precision 1 and scale 0. This makes sure that only fields which are mapped onto target fields with database type NUMBER(1,0) will get the type conversion definition applied and thus get the BooleanNumericConverter set as their type converter. The type conversion definition can then be used to automatically apply the BooleanNumericConverter to any field matching its filters.

Setting type converters by using a plug-in
LLBLGen Pro comes with a plug-in which is able to apply the checked type conversion definitions by searching through all the fields in the selected entities and typed views. This plug-in is called "Assign type converters plug-in", and can be activated through the context menu on the "Entities" or "Typed Views" node in the project explorer, or by right-clicking an entity or typed view, and then selecting the plug-in from the Run Plug-in context menu.

Setting type converters automatically
In a larger project, you don't want to run plug-ins from time to time; you want the designer to apply the type conversion definitions automatically, so that when you add a new entity or typed view, or new fields / entities are added through a refresh of the catalog(s), the necessary type converters are set automatically. To achieve this, set the preference AutoAssignTypeConverterToNewField to true. This preference is inherited by new projects, so if you change it while a project is loaded, you also have to change it in the project properties. With this setting enabled, LLBLGen Pro will match all type conversion definitions against any new field's type and db type definition; the best match is selected and used to set the type converter for that field. If no type conversion definition matches, no type converter is set and the field keeps the original .NET type supplied by the database driver used.


Designer - Inheritance mapping
Preface
LLBLGen Pro offers full inheritance mapping starting with version 1.0.2005.1. To make mapping inheritance hierarchies easy and productive, the designer offers various ways to create them, which are discussed in the following paragraphs. It's important that you've read the section Concepts - Entity inheritance and relational models, so you're familiar with the terms used in this section. Not only the creation of hierarchies is discussed; the destruction of (parts of) hierarchies is described as well.

Creating hierarchies of type TargetPerEntity
To create a hierarchy of type TargetPerEntity, you have two options. The first is to let LLBLGen Pro find all TargetPerEntity hierarchies among the entities in the project. To start this option, right-click Entities in the project explorer and select Construct 'Target-per-entity' Hierarchies, as shown in the following screenshot.

Creating all TargetPerEntity hierarchies in one go, before

After you've selected the option, LLBLGen Pro will find all hierarchies of type TargetPerEntity and will construct them for you. On the example database in the screenshot, one hierarchy was found: Employee <- Manager <- BoardMember, Employee <- Clerk, which gives the following project explorer overview.


Creating all TargetPerEntity hierarchies in one go, after

This is the hierarchy shown in the screenshot for the Inheritance sub tab in the section Adding and editing entities. The automatic construction of entity hierarchies rejects entities which are in a typed list, so the user won't run into the situation where the hierarchy has to be destroyed, which isn't allowed for entities which are in a typed list. A warning is shown stating which entities were rejected and why. A warning is also shown when an entity is manually made a subtype of another entity and one or both are in a typed list. Another option to create a hierarchy of type TargetPerEntity is to build the hierarchy per entity. You activate this option by right-clicking an entity and then selecting the context menu option Make Sub-type of, which will show you an entity type which is a candidate to be a supertype for the currently selected entity. See the screenshot below for an example.


Making an entity a subtype of another entity in a hierarchy of type TargetPerEntity

After selecting the suggested supertype from the context menu, LLBLGen Pro will create the hierarchy for you and make the selected entity in the project explorer a subtype of the entity selected from the context menu.

Creating hierarchies of type TargetPerEntityHierarchy
Hierarchies of type TargetPerEntityHierarchy are all mapped, per hierarchy, onto the same target. This means that creating a hierarchy of type TargetPerEntityHierarchy is done differently than creating hierarchies of type TargetPerEntity, as described in the previous paragraph. To create a hierarchy of type TargetPerEntityHierarchy, you right-click, in the project explorer, the entity which will become the supertype of a newly created subtype, and then select the option Create Sub-type For This Entity from the context menu, as shown in the following screenshot.


Creating a subtype of another entity in a hierarchy of type TargetPerEntityHierarchy

Selecting that option will bring up the following dialog, which allows you to specify the discriminator values and entity name for the subtype. If the entity you right-clicked in the project explorer isn't yet in a hierarchy of type TargetPerEntityHierarchy, you also have to specify the discriminator value for that entity.


Dialog for creating a subtype of another entity in a hierarchy of type TargetPerEntityHierarchy

The screenshot above shows that the discriminator column is CarType, which is of type System.Int32. The selected discriminator column's type determines which discriminator values are allowed. LLBLGen Pro will list all columns which have as .NET type byte/int16/int32/int64/Guid/Decimal or string. All known types in the hierarchy are also shown in the dialog to help you specify unique discriminator values. In the example above, the hierarchy is new, and no types are defined yet. By clicking OK, the subtype will be created as a new entity, mapped onto the same target as the supertype, and defined as a subtype of the chosen supertype.
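The role of the discriminator column can be sketched as follows: every row in the shared target carries a discriminator value that selects which entity type the row represents. The map contents below are illustrative values for a Car hierarchy, not data taken from the screenshot.

```python
# Sketch of discriminator-based type resolution in a TargetPerEntityHierarchy:
# all entities in the hierarchy share one target table, and the CarType
# discriminator value decides which entity type a fetched row represents.
DISCRIMINATOR_MAP = {
    0: "CarEntity",        # hypothetical root/supertype value
    1: "FamilyCarEntity",  # hypothetical subtype, same target table
    2: "SportsCarEntity",  # hypothetical subtype, same target table
}

def resolve_entity_type(row: dict) -> str:
    """Pick the entity type for a fetched row using its discriminator value."""
    value = row["CarType"]
    if value not in DISCRIMINATOR_MAP:
        raise ValueError(f"Unknown discriminator value: {value!r}")
    return DISCRIMINATOR_MAP[value]

print(resolve_entity_type({"CarType": 2}))  # SportsCarEntity
```

This is also why discriminator values must be unique within the hierarchy: two types sharing a value would make the row-to-type mapping ambiguous.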

Note: Discriminator fields shouldn't be foreign key fields. The reason for this is that by setting a different related entity, the type of the entity could change because the discriminator field value changes, which is impossible as that would mean the entity object in memory should also change type dynamically.

LLBLGen Pro will automatically unmap in the supertype all fields which are nullable, if the supertype is the root of the hierarchy. This helps you set up the hierarchy in less time, as in TargetPerEntityHierarchy hierarchies it's typical that fields mapped in subtypes are nullable in the target. No fields are mapped in the newly created subtype; you have to specify them yourself in the entity editor. See Designer - Adding and editing entities. Besides right-clicking the supertype entity in the project explorer, you can also right-click an entity in the hierarchy view on an entity's Inheritance info tab, in the entity editor. See Designer - Adding and editing entities for more details.


Viewing hierarchies
After a hierarchy is created, you can view the hierarchy in the project explorer by expanding the entity nodes of the entities in the hierarchy and then expanding the Sub-types nodes, as shown in the following screenshot.

Hierarchy in project explorer

Another way of viewing the hierarchy is to open the entity editor of one of the entities in the hierarchy, which will show the complete hierarchy in the Inheritance info tab of the entity editor, as shown in the section Designer - Adding and editing entities.

Destroying hierarchies
Creating hierarchies also means that from time to time, hierarchies have to be destroyed or parts of them have to be removed. Destroying a hierarchy isn't the same as deleting an entity from the project: by deleting an entity from the project, it's automatically removed from the hierarchies it's in, and its subtypes are automatically removed from the hierarchy as well. Removing an entity from a hierarchy, on the other hand, simply means that the entity isn't in the hierarchy after the action has been completed; it becomes a separate entity in the project. An entity which isn't a leaf in the hierarchy (i.e. it has at least one subtype) can't be removed from the hierarchy alone: all its subtypes have to be removed as well. To do so, right-click in the project explorer the entity from which you want the hierarchy to be destroyed and select Destroy Hierarchy, Starting With This Entity from the context menu, as shown in the following screenshot.


Destroying a hierarchy

After selecting this option, the selected entity and all entities below it will be removed from the hierarchy they're in and will become separate entities in the project. You can't undo this option. This option is also selectable from the context menu of the hierarchy viewer on the Inheritance info tab in the entity editor. If you right-click an entity which is a leaf, i.e. it doesn't have any subtypes, you can remove it from the hierarchy it's in by simply selecting Remove From Hierarchy, which will make it a separate entity in the project. You can't undo this option either. If the leaf entity was in a hierarchy of type TargetPerEntityHierarchy, you can't re-add it to the hierarchy after the action.
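The effect of Destroy Hierarchy, Starting With This Entity can be sketched as a recursive removal: the selected entity and everything below it leave the hierarchy. The supertype-to-subtypes dictionary below is an illustrative representation, not the designer's internal model.

```python
# Sketch of 'Destroy Hierarchy, Starting With This Entity': the selected
# entity and all its subtypes are removed from the hierarchy, depth-first,
# and become separate entities. The hierarchy maps entity -> subtype names.
def destroy_from(hierarchy: dict, entity: str) -> set:
    """Remove 'entity' and everything below it; return the removed set."""
    removed = {entity}
    for subtype in hierarchy.pop(entity, []):
        removed |= destroy_from(hierarchy, subtype)
    # Detach the entity from any remaining supertype's subtype list.
    for subtypes in hierarchy.values():
        if entity in subtypes:
            subtypes.remove(entity)
    return removed

hierarchy = {"Employee": ["Manager", "Clerk"], "Manager": ["BoardMember"]}
removed = destroy_from(hierarchy, "Manager")
print(sorted(removed))  # ['BoardMember', 'Manager']
print(hierarchy)        # {'Employee': ['Clerk']}
```

Removing the leaf Clerk instead would touch nothing else, matching the Remove From Hierarchy behavior for leaf entities.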


Designer - Working with plug-ins
Preface
LLBLGen Pro supports a flexible plug-in system which allows developers to extend the existing functionality of the designer. Plug-ins can target single elements, multiple elements or both, handle designer events, or be a system plug-in and act like a part of the designer. They can run as a pop-up dialog, but they can also run as part of the designer in a tab, like the entity editor. LLBLGen Pro ships with a variety of plug-ins: from plug-ins to add/remove interfaces or custom properties, to a full project inspector plug-in which runs as a tab inside the designer and allows you to inspect the full object model of an LLBLGen Pro project, to pluralization and singularization plug-ins. This section describes how to use plug-ins in the designer. The LLBLGen Pro SDK offers a more in-depth look at how to write your own plug-ins. The SDK is available to customers free of charge and comes with the source code for all the plug-ins shipped with LLBLGen Pro.

Viewing installed plug-ins
Plug-ins are implemented in any .NET language and derive from the generic Plugin base class in the LLBLGen Pro ApplicationCore assembly. Assemblies with plug-in implementations are placed in the Plugins folder in the LLBLGen Pro installation folder. At startup, LLBLGen Pro examines these assemblies and makes sure that every plug-in available is runnable by the user. To examine which plug-ins are available to you, which types they target, and versioning information, you can view them in the dialog shown after you select Tools -> Plug-ins. The following screenshot shows an example of this dialog.


The view available plug-ins dialog

A plug-in can be of the following types:
- Single element. This is a plug-in which can only target a single element, like a single entity.
- Multi element. This is a plug-in which can target one or more elements and which is always run with the object selector (see below, running plug-ins on multiple targets).
- Single element and multi element. This is a plug-in which can do both single and multi element actions.
- DirectRun. This is a plug-in which won't pop up the plug-in run dialog, but will start automatically once selected from the plug-in menu on a target. An example of this is the project inspector plug-in which is available on the project node. (Right-click the project node in the Project explorer, select Run Plug-in -> Project inspector. It will directly open a tab with the project inspector in the tab area of the designer.)
- Handler of a Designer Event. Designer events are events raised by the designer on a wide range of occasions, and you can bind a plug-in to handle such an event or even cancel the whole action raising the event. See the paragraph about Designer Events below for more details.
The type of the plug-in is set by the plug-in itself. You can't change it in the plug-in viewer dialog.

Running plug-ins on a single target
Running a plug-in on a project element is easy: just right-click the element in the project explorer, which can be an entity, typed list, typed view or a stored procedure call, and select the plug-in to run from the Run Plug-in context menu. The Run Plug-in dialog will pop up, allowing you to specify specific settings for the plug-in before it is run. Which settings you'll have to set depends on the plug-in. The shipped Add Custom Properties plug-in shows the following dialog:

Run the Add Custom Properties plug-in on a single element

By clicking the Run! button, the plug-in is executed on the selected element. A progress bar is shown to give feedback on the progress of the plug-in execution.

Running plug-ins on multiple targets
Running a plug-in on multiple elements starts with the same actions as running a plug-in on a single element: right-click a node in the project explorer, which can be the project node, the 'Entities' node, the 'Typed Lists' node, the 'Typed Views' node, the 'Retrieval Stored Procedure Calls' node or the 'Action Stored Procedure Calls' node. The Run Plug-in dialog will now have two tabs instead of one. The first contains the object selector, which is also available in the Generator configuration window. The second tab contains the same information as seen in the single-target run of a plug-in: the settings for the plug-in to run. The following screenshot shows this dialog:


Run the Add Custom Properties plug-in on multiple elements

The object selector is discussed more in detail in the following section.

Selecting participating objects
To select the elements to run the plug-in on, LLBLGen Pro uses an object selector component, which is also used in the generator configuration window when you click 'Select participating objects'. It allows you to select one or more elements by checking their checkboxes and store that selection under a name, called a group; update an existing group with the selection made; remove a previously created group; or simply select a previously created group as the set of elements to select. Groups are stored with the project and can be used whenever the object selector is used. The object selector also offers automatic selection of elements related to a selected element, based on 1:1/m:1 relations or m:n relations. It furthermore selects the entities in a typed list if the typed list is selected.

Designer Events and plug-ins
LLBLGen Pro's designer has the ability to call plug-ins when a certain event occurs, a so-called designer event. Designer events are very powerful as they can automate a lot of things in the designer. There are two types of designer events: cancelable designer events and normal designer events. Cancelable designer events are raised by actions which can be canceled by the result of the event, provided by the handler of the event. This way, a plug-in can verify whether an action may take place and deny it if some check fails. Plug-ins can be bound to designer events. This means that a plug-in is executed when the event is raised. This binding can be created by the plug-in itself: plug-ins can auto-subscribe to designer events. The designer ships with one example of that: the Project Verifier plug-in, which is bound to the designer event CodeGenerationBeforeStart automatically. This plug-in verifies whether the project is valid before code generation can take place. If something is wrong which would lead to incompilable code, the plug-in will cancel the action (so the start of the code generation is aborted) and the result will be displayed. Auto-subscribed plug-ins are always subscribed to their event; you can't remove them from the event. You can also bind a plug-in manually to a designer event. To do that, select Tools -> Bind Designer Events to Plug-ins... The following dialog will pop up:

Bind designer events to plug-ins

This dialog allows you to select a designer event and, per event, bind one or more of the plug-ins available for that event. In theory, you can bind any of the shipped plug-ins to designer events, if the event is related to one of the targets the plug-in works on. In practice, only the plug-ins which are really meant to handle designer events are useful for this. Two of them are the pluralization and singularization plug-ins, discussed in the next paragraph. For more details about designer events and how to write a plug-in to handle these events, please consult the LLBLGen Pro SDK documentation and source code.

Setting up pluralization and singularization of names
Two of the designer events which can be used directly with the shipped plug-ins are NamePluralToSingularConversion and NameSingularToPluralConversion, the events for, respectively, singularization and pluralization of a name. Using the dialog to bind designer events to plug-ins, you can bind the Plural to Singular converter plug-in to the NamePluralToSingularConversion event and the Singular to Plural converter plug-in to the NameSingularToPluralConversion event. An example of that is shown in the screenshot above. These plug-ins are based on the Castle Project's Inflector class for singularization/pluralization of English names. As this is an Open Source project, the source code of this class is available in the SDK source code, available to all LLBLGen Pro customers in the customer area. If you're using a different language, you can change this class or replace it entirely with a class which handles the pluralization / singularization rules for your language, creating a different plug-in, and bind that plug-in to the two conversion designer events instead. After you've bound these two events to their plug-ins, LLBLGen Pro is able to pluralize and singularize names it encounters and has to format for you. There are a couple of spots in the designer where singularization or pluralization has to take place. For entity names discovered from targets, the singularization plug-in is always called; if it's not bound, the name to process simply doesn't change. For 1:n relations, the field can be constructed by pluralizing the related entity name; this is also the case with m:n relations. To specify this, you use the macro for it in the FieldMappedOn*Patterns (see the Designer - Preferences and Project properties section for more details about these patterns). By default, the macro is $EndEntityName. The plural equivalent is $EndEntityName$P. When the $P macro is specified as a suffix, the event is raised for pluralization of the entity name of the end entity. There's also a singular macro: $EndEntityName$S. The $S and $P macros also work on IntermediateEntity.
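The $EndEntityName$P / $EndEntityName$S macro handling can be sketched as follows. The inflection rules here are a deliberately tiny stand-in for the real singularization/pluralization plug-ins, and the `expand` helper is an illustrative model, not designer code.

```python
# Sketch of macro expansion for field-name patterns: $EndEntityName$P asks
# for the pluralized entity name, $EndEntityName$S for the singularized one,
# and the bare macro uses the name unchanged. Toy English inflection rules.
def pluralize(name: str) -> str:
    return name[:-1] + "ies" if name.endswith("y") else name + "s"

def singularize(name: str) -> str:
    if name.endswith("ies"):
        return name[:-3] + "y"
    return name[:-1] if name.endswith("s") else name

def expand(pattern: str, end_entity_name: str) -> str:
    if pattern == "$EndEntityName$P":   # $P suffix: raise the pluralization event
        return pluralize(end_entity_name)
    if pattern == "$EndEntityName$S":   # $S suffix: raise the singularization event
        return singularize(end_entity_name)
    return pattern.replace("$EndEntityName", end_entity_name)

print(expand("$EndEntityName$P", "Category"))  # Categories
print(expand("$EndEntityName", "Order"))       # Order
```

If no plug-in is bound to the conversion events, the name would pass through unchanged, which matches the behavior described above.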



Designer - Refreshing the catalog schemas
Preface
If changes were made to the catalog schemas after you've created your project (e.g. you've added a table or a relation, renamed a field, or removed a table or view), you will need to refresh the catalog schemas in the project to start working with the new schema set. This is because LLBLGen Pro only makes a snapshot of your database schemas while creating the project, enabling you to work on your project without being connected to the database. Use the refresh functionality with care: refreshing the catalog can have severe impact on your project. If elements you use in your project have targets (like a table or view) which have been deleted from the catalog / schema, and these elements are used in other elements as well (for example, an entity which is also part of a typed list), those other elements are modified as well, because the target of the entity has been removed from the catalog. By default, LLBLGen Pro will back up your current project before starting a catalog refresh action, so you can always return to the version prior to the refresh. If you don't want a backup created, you can switch this off by setting CreateBackupBeforeRefresh to false in the user preferences. The filename used for the backup is the regular filename, appended with a suffix containing the exact date and time in the format ddmmyyyyhhMMss ('m' is month, 'M' is minute). The folder into which the backup is placed is defined in the Preferences. If you have multiple catalogs in your project (SqlServer only), you can decide to refresh a single catalog (right-click the catalog to refresh and select 'Refresh this catalog') or refresh all catalogs. The same options are available for an unattended refresh.
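The ddmmyyyyhhMMss backup suffix can be sketched as follows. The project filename and the exact way the suffix is joined to it are illustrative assumptions; only the date/time format itself comes from the text above.

```python
# Sketch of the backup filename suffix: day, month, year, hour, minute,
# second, with no separators ('m' = month, 'M' = minute as described above).
from datetime import datetime

def backup_suffix(moment: datetime) -> str:
    """Format a timestamp as ddmmyyyyhhMMss."""
    return moment.strftime("%d%m%Y%H%M%S")

def backup_filename(project_filename: str, moment: datetime) -> str:
    # Hypothetical join: the suffix appended directly to the regular filename.
    return project_filename + backup_suffix(moment)

print(backup_filename("Northwind.lgp", datetime(2008, 3, 14, 9, 5, 7)))
# Northwind.lgp14032008090507
```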

Refreshing the catalog schemas
Single catalog refresh, attended
To perform a single catalog refresh, right-click the catalog in the project explorer and select 'Refresh this catalog'. If VerboseRefresh in the user preferences is set to true, you'll first see a warning screen stating the possible problems that may arise when proceeding. If the project has unsaved changes, it is saved using the normal save procedure. After that, if CreateBackupBeforeRefresh is set to true in the user preferences, LLBLGen Pro will create a backup of the current project and the filename of that backup will be reported to you. The catalog refresh dialog then pops up and allows you to specify login/connection details. If you want to refresh the catalog with a catalog with another name, first rename the catalog in the catalog explorer (SqlServer/DB2). You can again specify which elements to read from the schemas. If you have a project element based on a table, view or stored procedure, these elements will always be refreshed and you can't uncheck them. If you want new catalogs / schemas to be imported into your project, you can check these as well. Click 'Connect', then click 'Retrieve schema(s)'. The schemas are retrieved and the project is migrated. After the migration has been completed, a full report will be presented to you stating any changes made to your project, if you've set ShowReportAfterRefresh in the user preferences to true. If desired, you can save this report as .rtf or as xml for later reference. When the refresh succeeds, the project is refreshed in the Project Explorer and you can start using the new catalog(s). All changed objects in the project will have their front/back color changed in the project explorer. If you've specified to automatically add new elements to the project, by setting AddNewElementsAfterRefresh and/or AddNewViewsAsEntitiesAfterRefresh to true in the user preferences, the new elements are added for you automatically. If you've specified to keep the names in the project in sync with the names in the catalog schemas, by setting SyncMappedElementNamesAfterRefresh to true, the names are synchronized for you automatically. If you are not satisfied with the changed project, you can abandon it by closing it without saving, and load the backup LLBLGen Pro made, if you've decided to create one. An example of the catalog refresh dialog is shown below.

Catalog refresh dialog

Multi catalog refresh, attended
To perform a multi catalog refresh, you right-click on the Catalogs node in the project explorer and you select 'Refresh All Catalogs'. You can also select Project -> Refresh All Catalogs... from the menu. Otherwise the steps are the same as for Single catalog refresh, attended.

Single catalog refresh, unattended
You can also perform an unattended refresh on a single catalog. To start an unattended refresh, right-click the catalog to refresh in the project explorer and select 'Unattended refresh' from the context menu. An unattended refresh automatically refreshes the catalog using the last used connection credentials. It might be that the password you used isn't yet stored in the project, in which case the refresh fails the first time; if so, perform an attended refresh first. If you've specified to manually select stored procedures, the stored procedure selector will be shown, regardless of whether you're running an unattended refresh. If you don't want this, switch off manual stored procedure selection in the user preferences. For further information about actions taken during a refresh, please see Single catalog refresh, attended.

Multi catalog refresh, unattended
To start a multi catalog unattended refresh, right-click the Catalogs node in the project explorer and select 'Unattended Refresh All Catalogs', or select Project -> Unattended Refresh All Catalogs from the menu. The remaining steps are the same as for Single catalog refresh, unattended.


Changing catalog name/schema name for refresh
It can happen that during the project's lifecycle the name of the catalog or schema changes, or the catalog is moved to another server where it has a different name. The mechanisms to refresh the existing catalog(s)/schema(s) will then fail, as the catalog name(s)/schema name(s) won't be found. To refresh your catalog or schema with the meta-data of a different catalog/schema, go to the catalog explorer, right-click the catalog or schema name and select Rename... or press F2. Specify the name of the catalog/schema you want to use for the next refresh cycle and click OK. Then refresh your catalog.

Correcting mappings
When a catalog is refreshed, a table or view may have been renamed since the last time the catalog was refreshed or the project was created. During the migration of the entities, when LLBLGen Pro detects that a target table or view is not available under its old name, it tries to locate the table or view under its new name. However, it may make a mistake, for example because two renamed tables look alike and it's unclear which one to choose. A mistake in the mapping of an entity can lead to an entity being mapped onto the wrong table. (To get an overview of which entities are mapped on which tables/views, please see Designer - Adding and editing entities.) Normally these mistakes are rare, but you may run into them. You can order LLBLGen Pro to show a list of 'orphaned' entities: entities whose targets aren't found in the new catalog information under their old names. To do this, set the preference setting ManuallySelectRenamedTargetsAfterRefresh to true. If this preference is set to true and LLBLGen Pro detects one or more orphaned entities, it shows a dialog like the following screenshot. This dialog lets you correct the mappings per entity.

Project element target selector

Per selected element you can select a target from the drop-down box and, by clicking Set, set the entity to a new target. If you don't need an entity anymore, because its target has been deleted from the database schema, simply leave its new target empty and the entity will be removed after the refresh has completed. Clicking 'OK' lets the refresh proceed. This dialog is not shown during an unattended refresh; in that case the catalog refresher logic uses the default for ManuallySelectRenamedTargetsAfterRefresh: false.


LLBLGen Pro v2.6 documentation. ©2002-2008 Solutions Design


Designer - Generating code
Preface
Generating code is the end result of the design process in the designer. Using your project definition and the settings you specify for the generation process, it generates source code and/or produces other output, depending on which tasks you've selected to run and, for example, which templates you've selected to use. This section describes the elements you can configure for the generation process.

Configuring the generation process
When you press F7 or select 'Generate' in the Project menu or from its context menu, the 'generation process configure dialog' pops up to let you set up the generation process. This dialog consists of three main areas, each placed onto a tab, accessible at the top of the dialog. These three areas are discussed below. When a code generation cycle is started by clicking the Start generator button, all settings you've specified in this dialog are preserved inside the project. If you want to re-use these settings when you have to re-generate the code, it's key to save your project after you've generated code. If you generate code again without changing anything, the project isn't marked as changed, so you don't have to save in that situation. Right after you've pressed F7 or clicked the 'Generate' menu option, LLBLGen Pro raises the Designer event CodeGenerationBeforeStart. This event has an auto-bound plug-in, the Project Verifier plug-in, which checks for errors in the project that could lead to compile problems. It currently checks for one problem: inherited Field mapped onto relation instances which have the same name as a Field mapped onto relation in the same entity. Example: Manager derives from Employee. Employee has a relation with Department (Employee.WorksForDepartmentID - Department.DepartmentID) and by default the field 'Department' is mapped onto that relation in Employee. Manager also has a relation with Department (Manager.ManagesDepartmentID - Department.DepartmentID) and that relation also has, by default, a field named 'Department' mapped onto it. This doesn't override the inherited field Employee.Department, as it represents a different relation. As these problems often go undetected, the Project Verifier tries to detect them before code generation takes place.
If it finds such a problem, it cancels the code generation configuration and displays the error in the log viewer so you can correct it. You can extend the plug-in using its source code from the SDK.

General settings tab
The General settings tab is the tab you start with. It contains various parameters for the code generation process which influence the contents and options available to you on the other two tabs. An example of the General settings tab is shown below:


General settings tab

General execution parameters. Start by selecting the Target language you want to use. This is the language the generated code will be in. The next step is to select which Target platform you are generating code for. Make sure you select the right platform, as it influences the templates available to you and, for example, the Visual Studio project file formats. Not all platforms are available for all databases; for example, the Compact Framework platforms are only selectable if your project is based on a SqlServer catalog. .NET 2.0 is the default platform. After you've made the selection for the target platform, you have to specify the Root namespace in the root namespace textbox. This textbox is filled with the root namespace defined in the project properties. Say you've defined it as "SD.Northwind"; the namespaces in the generated code then all start with 'SD.Northwind', so they will look like 'SD.Northwind.CollectionClasses' or 'SD.Northwind.Entities'.


Note for VB.NET users: If you decide to add the generated code files manually to an existing VB.NET project in Visual Studio.NET, keep in mind that the VB.NET project has its project name defined as its 'Root namespace', and this name is used as a prefix to all namespaces defined in the generated code. This leads to non-compilable code. To avoid this, clear the 'Root namespace' textbox in the properties dialog of the VB.NET project in Visual Studio.NET. It's recommended, however, that you use the generated Visual Studio.NET project file, because it is already set up correctly and already contains the correct references. When the root namespace has been defined, you pick the template group you want to use for your generated code. If you're unfamiliar with what a template group is, please see Concepts - Templates and Template groups to inform yourself about the differences between the available template groups and which one you should select for your project. At tab 3, Task queue to execute, you can specify additional root namespaces and destination folders per task group. This is described in more detail below.

With the button Create participating object subset... you can create a subset of the objects to generate code for. By default the complete project is used for code generation, so you don't need to specify anything and you can safely leave this button alone. However, some scenarios, for example small tests, may require that you create a subset of the project elements and generate code for just those elements. To specify a subset, click the Create participating object subset... button. This opens a form with the object selector already discussed in the plug-ins section. For more information about this object selector, please see: selecting participating objects. The object selector opened this way shows no object being selected. This gives you a clean slate to define participating objects. If you decide not to use a subset, simply click Cancel. If you don't check any object and click OK, a warning informs you that you've created an empty subset; you can then decide to discard this empty subset and use the whole project, or use the empty subset. You can also manage the subsets by right-clicking the project node in the project explorer and selecting Manage object groups from the context menu, or by selecting Manage object groups from the Project menu in the LLBLGen Pro designer menu.

Code file parameters. The generated code has to be stored in a directory, which you define in the Destination root folder textbox. Be careful which directory you choose, because all code files, as well as each directory created by the task performers, are created in that directory or its subdirectories. You can specify a path relative to the location the project file was loaded from. For example, say you have a folder called 'MyBigProject' containing two folders, LLBLGenProProject and VS.NETProject, and the LLBLGen Pro project file is located in the LLBLGenProProject folder. To generate code in the VS.NETProject folder, simply specify ..\VS.NETProject as the Destination root folder. Paths starting with '..\' or '.\' (without the quotes) are recognized as relative paths. You'll notice you're now able to generate code, as the Start generator button is enabled; though it's recommended to examine the other two tabs' contents as well before proceeding, at least the first time you generate code for a project.

Template bindings tab
The Template bindings tab is the tab where you define the precedence of the template bindings to use. An example of the template bindings tab is shown below:


Template bindings tab

As discussed in Concepts - Templates and Template groups, LLBLGen Pro uses template files which are bound to a TemplateID. These bindings are defined in so-called template bindings files. On the Template bindings tab you'll see all found template bindings files and their contained TemplateID - template file bindings for the target platform you've chosen on the General settings tab and the target database type.

TemplateID precedence. LLBLGen Pro ships with a standard set of templates and accompanying template bindings files. For some templates there are multiple versions, like the ResultsetFields templates: one version which generates code as it was done in LLBLGen Pro v1.0.2005.1, and one which is slightly modified and generates more compact code, new for v2.x of LLBLGen Pro. The former template, the one which generates code like v1.0.2005.1, is bound to the particular TemplateIDs in the templatebindings file called "SD.TemplateBindings.SharedTemplates.BackwardsCompatibility.NETxx", where xx is the target platform version number. The latter is defined in the standard templatebindings file called "SD.TemplateBindings.SharedTemplates.NETxx". Both bind different files to the same TemplateIDs. By placing one templatebindings file above another, its bindings overrule the other file's bindings to the same TemplateIDs. In the screenshot above, the backwards compatibility templatebindings file is placed above the standard shared templates templatebindings, which means that for the TemplateIDs which are in both templatebindings files (in this case SD_ResultsetFieldsAdapterTemplate and SD_ResultsetFieldsTemplate), the files bound in the backwards compatibility templatebindings file are used, because those TemplateIDs take precedence over the ones in templatebindings files placed below it. You can change the precedence of a templatebindings file by selecting it and then clicking the move up/down buttons at the right of the list. The order in which the templatebindings are placed at the time the code generation is started is preserved in the project file. You can also use this mechanism to overrule a template binding in the standard bindings, to use your own version of a shipped template. For more information about how to create your own templatebindings files, see the LLBLGen Pro SDK documentation.

Task queue to execute tab
The Task queue to execute tab is the tab where you select and define the different tasks which should be executed during code generation. If you're unfamiliar with the concept of tasks and taskgroups, please see: Concepts - task based code generation. Two examples of the Task queue to execute tab are shown below:


Task queue to execute tab with task group selected

Task queue to execute tab with a task selected

At the top of the Task queue to execute tab, you're able to select your preset of choice. A preset, as discussed in the aforementioned Concepts - task based code generation, is a definition of a run queue, with tasks and task groups in the right order and with the right values for the task parameters. When you select a task group in the run queue (task groups have a blue color and a different icon, and are used to group tasks together), you'll see the Selected task group information below the run queue, as shown in the first Task queue to execute tab screenshot. This area allows you to specify an additional root namespace and an additional destination folder as suffixes to the already known root namespace and destination folder for that task group. This information is stored inside the preset, and nested groups use the information of their parents. You can also specify a display name for task groups and tasks in the run queue, which is persisted in the preset. These names can help you better understand which tasks do what exactly if you, for example, add the same task multiple times but with different parameters. To change the display name for a task or task group, you can also select it in the run queue tree and press F2. You can create new presets if you like, by simply clicking the New... button, or by changing an existing preset and then clicking the Save As... button to save it under a different name. The shipped presets are sealed. This means that LLBLGen Pro doesn't let you alter them through the GUI, to avoid you overwriting the shipped presets by mistake. You can also seal your own presets, by checking the checkbox on the save dialog as shown below:

The Save preset dialog

The run queue itself, the queue of tasks executed in top-down fashion when you click Start generator (so the tasks at the top of the queue are executed first), is displayed as a tree, as you can see in the screenshot above. There are several ways to change the run queue of a given preset. First of all, you can add new tasks or taskgroups to the run queue. To do so, first select the task or taskgroup in the run queue which marks the position where the newly added tasks will be inserted. Then click the Add Tasks button. This button pops up the following dialog:


The Add Tasks dialog

In this dialog you check the checkboxes of any task or taskgroup you want to add to the run queue and then click OK. The tasks are added with their default parameters. You might want to alter these; please see 'Altering an existing task's parameters' below for details. You can also decide to remove a task or taskgroup: select the task or taskgroup to delete from the run queue and then click the Remove button. An alternative to removing a task is disabling it. In the shipped presets, several tasks are disabled by default, like the tasks which generate the PredicateFactory and the SortClauseFactory classes. As these classes aren't used in LLBLGen Pro v2, they're disabled by default so they're skipped at generation time. To make it easier for users to include them in their run queue, for example because they're upgrading from v1.0.2005.1 code, the tasks are already added to the presets; the user then just has to select them and check the Is enabled checkbox in the Selected task information area. To change the order in which tasks are executed, you can use the Move up and Move down buttons; these work on the selected task or taskgroup. To merge another preset with the one loaded in the run queue, at the currently selected spot, click the Merge preset button. A dialog pops up with a drop-down list of all the presets you can merge at that spot. By clicking OK, the preset you've selected is merged at the spot you had selected in the run queue.

Altering an existing task's parameters. By selecting a task in the run queue you can change its parameters in the Selected task information area. Which parameters are available depends on the task and the associated Task performer class. You can see which task performer class is associated with a task by expanding the task node in the run queue.


In general you don't need to alter parameters or add tasks to use LLBLGen Pro: the shipped presets are enough to generate code. If you want to customize the generation process, it's recommended you read the LLBLGen Pro SDK documentation, which contains more detailed information about the various parameters of the available tasks. Most task parameters are straightforward. When you select a parameter in the Parameters grid, the bottom Description area shows the description of the selected parameter so you know what to specify as a value.

Starting the generation process
When you are done configuring the generation process, you can start the generator by clicking 'Start generator'. The generator and the task performers log the outcome of their actions in the application output window, so it can be helpful to examine that output after the generation process has completed, to check whether the generator has performed its actions as you planned. The generator also shows a detailed generation report and shows errors in red. If you've set ShowTaskPerformerReport to false in the user preferences, this report is not shown. You can save the report contents as .rtf or as xml.


Generated code - Compiling the code
Preface
After you've generated code, you of course want to use the code in compiled form so you can reference it from another project and actually use it. This section describes how to compile the generated code for C# and VB.NET, which references to add and how to use it in your own projects.

Compiling
The easiest way to compile the generated code is to load the generated Visual Studio.NET project file(s) into Visual Studio.NET or another IDE which can read these files and compile the project. This will automatically create a class library for you which you can use immediately. If you decide to construct the project manually, create a new library project and add all the generated classes to that project. The references you have to make are identical for VB.NET and C# and are described below.

Note: .NET 3.0 uses the .NET 2.0 framework and CLR internally, so the .NET 2.0 builds of our code are also the ones to use for .NET 3.0. .NET 3.5 is also mainly .NET 2.0 based, so our .NET 2.0 dlls are fully usable on .NET 3.5 without issues. The generated code references the following assemblies, and you should add references to these assemblies to your project if they're not yet present. It's highly recommended to use the generated VS.NET projects, which should automatically reference the correct assemblies. If you're upgrading to a newer version of LLBLGen Pro, it's recommended to check whether your project indeed references the correct runtime libraries, as VS.NET sometimes may point to previous versions if they're still installed on your system. After you have generated the code and checked whether the right references are available in your VS.NET project, you can compile the code into a working assembly. This assembly can then be referenced from your business logic project and other projects which want to use the generated functionality.

SD.LLBLGen.Pro.ORMSupportClasses assembly
LLBLGen Pro ships with three versions of this dll: one for .NET 1.0, one for .NET 1.1 and one for .NET 2.0/3.0/3.5. The version for .NET 2.0/3.0/3.5 is compiled against the latest publicly available .NET 2.0 build. If you are using an IDE like Visual Studio.NET 2003 or Borland C# Builder, you are using .NET 1.1 and have to use the .NET 1.1 version. If you are using Visual Studio.NET 2002, you are using .NET 1.0 and have to use the .NET 1.0 version. If you're using VS.NET 2005 or VS.NET 2008, you can use the .NET 2.0 version. Which file to select for which version is easy: the .NET version is included in the filename: SD.LLBLGen.Pro.ORMSupportClasses.NETxy.dll for normal .NET applications, or SD.LLBLGen.Pro.ORMSupportClasses.CFxy.dll for Compact Framework applications. For the Compact Framework, you have to reference the CF10 version for Compact Framework .NET 1.0, the CF20 version for Compact Framework .NET 2.0 (shipped with VS.NET 2005) and the CF35 version for Compact Framework .NET 3.5 (shipped with VS.NET 2008). The ORMSupportClasses assembly is referenced by all generated code projects, and you also need to reference it from any project which uses the classes from the generated code.

SD.LLBLGen.Pro.DQE.yourDatabase assembly
All SQL is generated by a Dynamic Query Engine, or DQE in short. These engines are database specific and each supported database has its own assembly: SD.LLBLGen.Pro.DQE.yourDatabase.NETxy.dll, where yourDatabase is for example SqlServer, Oracle, Firebird, etc. LLBLGen Pro ships with three versions of this dll per database, one for .NET 1.0, one for .NET 1.1 and one for .NET 2.0+, unless a given database isn't supported on a given .NET platform; for example, Firebird is only supported on .NET 1.1 and .NET 2.0+, and Sybase ASA is only supported on .NET 2.0+. For the Compact Framework .NET 1.0, 2.0 and 3.5, there's also a build of the SqlServer DQE for these platforms.

MySql specific: If you're using MySql with v3.x of the CoreLab MySqlDirect provider instead of the v4 version, please reference the DQE dll for MySql from the MySqlDirectv355 folder in the DotNetxy folder of your .NET version in the RuntimeLibraries folder instead. This DQE is compiled against v3.55 of the CoreLab MySqlDirect provider.

Oracle 10g ODP.NET specific: If you're using ODP.NET v10.1.0.4 instead of the newer v10.2, please reference the DQE dll for Oracle ODP.NET 10g v10.1 from the Oracle10gv10104 folder in the DotNetxy folder of your .NET version in the RuntimeLibraries folder instead. This DQE is compiled against v10.1.0.4 of the Oracle 10g ODP.NET provider. If you're using ODP.NET 11g, you should be able to use the normal ODP.NET 10g driver.

IBM DB2 specific: If you're using the IBM DB2 v8.1.2.1 provider instead of the newer v9.0.0.2, please reference the DQE dll for IBM DB2 v8.1.2.1 from the IBMDB2v8121 folder in the DotNetxy folder of your .NET version in the RuntimeLibraries folder instead. This DQE is compiled against v8.1.2.1 of the IBM DB2 provider.

The DQE assembly has to be referenced in the generated code project for SelfServicing and in the DatabaseSpecific project of Adapter.

SD.LLBLGen.Pro.LinqSupportClasses.NET35 assembly
LLBLGen Pro v2.6 ships with full Linq support on .NET 3.5 by using a Linq provider. This provider is defined in the assembly SD.LLBLGen.Pro.LinqSupportClasses.NET35.dll. When you generate code for .NET 3.5/VS.NET 2008, the generated code requires a reference to this assembly in the SelfServicing generated VS.NET project and in the Adapter DBGeneric generated VS.NET project.

System.EnterpriseServices assembly
This is a .NET framework assembly and is necessary to compile the code, since the generated code references COM+ specific features in the COM+ transaction variant and DbUtils for COM+ (SelfServicing), or the ComPlusAdapterContext class (Adapter). This assembly is required in SelfServicing projects and in the DatabaseSpecific project of Adapter.

(Optional) The .NET data provider of the database type used.
These assemblies (if applicable) have to be referenced in the SelfServicing projects and the DatabaseSpecific project of Adapter.

For SqlServer, this provider is already included when you reference System.Data; the same goes for MS Access (OleDb), so you don't have to reference another assembly in this case.

For SqlServer CE Desktop, you have to reference the SqlServerCE client dll, available in the CE Desktop SDK.

For Oracle using ODP.NET, you need to have the latest ODP.NET installed for your Oracle version and reference its Oracle.DataAccess assembly, if you created the project using the Oracle for ODP.NET driver. ODP.NET 10g specific: your ODP.NET version may be older than the one we used to build the Oracle ODP.NET DQE; if that's the case, please change the reference to the proper Oracle.DataAccess assembly.

For Oracle using the Microsoft Oracle provider (available in .NET 1.1 and .NET 2.0+), you have to reference the System.Data.OracleClient assembly, shipped with .NET.

For Firebird, you need to have the latest final version of the Firebird.NET provider installed, and you have to reference FirebirdSql.Data.Firebird.dll when using .NET 1.x and FirebirdSql.Data.FirebirdClient.dll when using .NET 2.0.

For IBM DB2 UDB, you need to have the IBM DB2 .NET provider installed and you need to reference IBM.Data.DB2.dll, shipped with the latest ClientAccess software for DB2 or with the DB2 UDB personal edition v8.1.x or higher. Your IBM.Data.DB2.dll version may be older than the one we used to build the DB2 DQE; if that's the case, please change the reference to the proper IBM.Data.DB2 assembly.

For MySql, you need to have the CoreLab MySqlDirect.NET provider installed and you need to reference CoreLab.MySql.dll and CoreLab.Data.dll when using v4 of MySqlDirect, or just CoreLab.MySql.dll if using v3.xx of MySqlDirect.

For PostgreSql, you need to have the latest Npgsql provider installed and you have to reference the Npgsql assembly.

For Sybase ASE, you need to have the latest Sybase ASE client installed and you have to reference Sybase.Data.AseClient.dll.

For Sybase ASA, you need to have the latest Sybase ASA client installed and you need to reference iAnywhere.Data.SQLAnywhere.dll.

The DQE assemblies are compiled against a given version of the database specific ADO.NET provider. We try to keep this at the same version throughout an LLBLGen Pro version, so users don't have to upgrade an ADO.NET provider if we ship a bugfixed set of runtime libraries. However, you may use a newer version of the ADO.NET provider of your database, and that particular version may not work with the DQE because of assembly version mismatches. For these occasions we provide additional builds of our runtimes against different versions of the providers. These are located, in general, in subfolders of the RuntimeLibraries\DotNetxy\ folder.

(Optional) Type converter assemblies
If you're using a type converter in your project, you have to add a reference to the type converter assembly which contains the type converter(s) used, for example the SD.LLBLGen.Pro.TypeConverters.dll assembly in the TypeConverters folder in the LLBLGen Pro installation folder. In SelfServicing, you reference this assembly in the single generated code project; in Adapter, you reference it in the database specific project.

Compiling on the command line
If you want to compile the code from the command line, you have to follow similar steps. Be sure to specify the complete reference paths to the assemblies you have to reference. The VB.NET and C# compilers both have an option to recurse through folders, so you can include all generated source files in the build by specifying the option /recurse:*.cs for C# or /recurse:*.vb for VB.NET.
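As a sketch, a command-line build could be composed as below. The /target:library, /out:, /reference: and /recurse: switches are standard csc options; the assembly filenames, the output name Northwind.dll and the folder layout (a libs folder next to a DatabaseGeneric source folder) are assumptions for the example, not something this documentation prescribes:

```shell
# Hypothetical layout: generated C# code in ./DatabaseGeneric, runtime
# libraries copied into ./libs -- adjust both paths for your own setup.
REFS="/reference:libs/SD.LLBLGen.Pro.ORMSupportClasses.NET20.dll"
REFS="$REFS /reference:libs/SD.LLBLGen.Pro.DQE.SqlServer.NET20.dll"

# Compose the C# compiler invocation; /recurse pulls in all generated
# .cs files below the folder, /target:library produces a class library.
CSC_CMD="csc /target:library /out:Northwind.dll $REFS /recurse:DatabaseGeneric/*.cs"

# Shown here as a dry run; remove the echo to actually invoke csc.
echo "$CSC_CMD"
```

For VB.NET the shape is the same with vbc and /recurse:*.vb instead.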

Using the compiled assembly/ies
If compilation of the generated code was successful, you can reference the compiled dll from your project, which holds the code using the generated classes. You have to reference the following assemblies in your project, and add Imports/using statements with the specified namespaces at the top of the code files which use types defined in the generated code or in the ORM support classes library.

SD.LLBLGen.Pro.ORMSupportClasses. See the remark above about which version to select. Include Imports/using statements with the namespace SD.LLBLGen.Pro.ORMSupportClasses.

System.EnterpriseServices. Optional. If you use COM+ transaction functionality, you have to reference this assembly in your own code where you use the COM+ transaction, because your class has to derive from ServicedComponent, which is located in this assembly. If you do not use COM+ transactions explicitly, you don't need a reference to this assembly. Include Imports/using statements with the namespace System.EnterpriseServices.

Furthermore, you have to copy the generated app.config file to the executable project which references (perhaps indirectly, via another assembly) the generated code. If you are developing a web project, you have to copy the appSettings tag and its contents from the generated app.config file to the web.config file of your application, inside the configuration tag. This makes sure the generated code will be able to read the connection string. .NET 2.0 or higher: Starting with .NET 2.0, a .config file (app.config or web.config) can have a separate connection strings section in which you can store the connection string as well. This is supported by LLBLGen Pro. A connection string specified in the connection strings section of the config file is read first. If a connection string with the name specified in the generated code is found there, that connection string is used. If it's not found, the appSettings section is consulted.
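The lookup order described above can be illustrated with the following .config sketch. The key name "Main.ConnectionString" and the connection string value are examples only; use the name found in your generated app.config file:

```xml
<configuration>
  <!-- On .NET 2.0+ this section is consulted first, if an entry with a
       matching name is present. -->
  <connectionStrings>
    <add name="Main.ConnectionString"
         connectionString="data source=myServer;initial catalog=Northwind;integrated security=SSPI;" />
  </connectionStrings>
  <!-- Fallback, and the only option on .NET 1.x. -->
  <appSettings>
    <add key="Main.ConnectionString"
         value="data source=myServer;initial catalog=Northwind;integrated security=SSPI;" />
  </appSettings>
</configuration>
```

In a web project, only the relevant sections go inside the existing configuration tag of web.config; don't copy a second configuration root element.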

Requesting the Runtime libraries buildnumber and version number
You can request the version of the runtime libraries you're currently using in your code using:
// C#
string version = SD.LLBLGen.Pro.ORMSupportClasses.RuntimeLibraryVersion.Version + "." +
    SD.LLBLGen.Pro.ORMSupportClasses.RuntimeLibraryVersion.Build;

' VB.NET
Dim version As String = SD.LLBLGen.Pro.ORMSupportClasses.RuntimeLibraryVersion.Version & _
    "." & SD.LLBLGen.Pro.ORMSupportClasses.RuntimeLibraryVersion.Build

The runtime libraries also carry a file version attribute, which is visible when you right-click the assembly dll (one of the DQE assemblies or the ORM Support classes assembly) in Windows Explorer, select 'Properties' and then view the Version tab. This version has the following format: 2.6.08.mmdd, where mmdd is the date the assembly was released (mm for month, dd for day).


Generated code - Database specific features
Preface
This small section illustrates database specific features which are available to you through configuration, either through the .config file of your application or through code. It is more or less an aggregation of what's discussed elsewhere as well, so you won't miss a detail which could be of great benefit to your project. Also be sure to check Application configuration through .config files for more details about features like catalog- and schema-name overwriting.

SqlServer specific: NEWSEQUENTIALID() support
When you're using uniqueidentifier types for primary keys on SqlServer 2005, you can benefit from a new SqlServer 2005 feature: NEWSEQUENTIALID(). This feature auto-generates new GUIDs for your primary keys which are sequential, so they are friendly for clustered indexes. To use this feature in LLBLGen Pro, specify NEWSEQUENTIALID() as the default for the primary key field in the table definition. Furthermore, you shouldn't set the PK field to a new GUID value when you're saving the entity. Before you save the entity, set the SqlServer Dynamic Query Engine (DQE) in SqlServer 2005 compatibility mode (see below). The DQE will then let the database insert the NEWSEQUENTIALID() produced value, which is automatically retrieved for you into the entity's PK field.
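A sketch of such a table definition (table and column names are illustrative):

```sql
-- The PK default generates sequential GUIDs server-side;
-- don't assign the PK field in code before saving the entity.
CREATE TABLE Customer
(
    CustomerID uniqueidentifier NOT NULL
        CONSTRAINT DF_Customer_ID DEFAULT NEWSEQUENTIALID()
        CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED,
    CompanyName nvarchar(100) NOT NULL
);
```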

SqlServer specific: compatibility mode
With the introduction of SqlServer 2005, it became necessary to signal the SqlServer DQE that it should use SqlServer 2005 specific features. This was more appropriate than creating a new codebase with solely SqlServer 2005 features, as that would make it problematic to run code originally created for SqlServer 2000 on SqlServer 2005. You can set the SqlServer DQE's compatibility mode in two different ways: through the application's .config file or through a code statement. For the .config file method, please see Generated code - Application configuration through .config files. For the code method, please see for SelfServicing: Generated code - DbUtils functionality, and for Adapter: Generated code - DataAccessAdapter functionality.
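As an illustration of the .config file method, the appSettings key sketched below sets the mode application-wide; the exact key name is documented in Generated code - Application configuration through .config files, and is assumed here to be SqlServerDQECompatibilityLevel:

```xml
<configuration>
  <appSettings>
    <!-- Assumed key name; verify against the .config files section.
         Values mirror SqlServerCompatibilityLevel: 0 = SqlServer 7,
         1 = SqlServer 2000 (default), 2 = SqlServer 2005. -->
    <add key="SqlServerDQECompatibilityLevel" value="2"/>
  </appSettings>
</configuration>
```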

SqlServer specific: ArithAbort support
When you're using indexed views in your database, and you're inserting data into tables which are used in these indexed views, you'll run into the problem that ARITHABORT has to be set to ON before the particular insert statement is executed. To signal the SqlServer DQE that it has to emit the ARITHABORT statement prior to an insert statement, use the ArithAbort flag implemented on the DbUtils class (SelfServicing) or the DataAccessAdapter class (Adapter). Please see for SelfServicing: Generated code - DbUtils functionality, and for Adapter: Generated code - DataAccessAdapter functionality.

SqlServer specific: User Defined Types support
SqlServer 2005 supports User Defined Types (UDTs) written in a CLR language like C# or VB.NET. The SqlServer driver can read these fields: if you're using UDTs in your tables, the fields which have a UDT as their type are read by the driver and their UDT is considered their valid type. Entities mapped onto these tables (or views) then have fields whose .NET type is equal to the UDT of the target field in the table/view they're mapped on. The generated entity classes have properties which refer to the UDT as the type of the property, as the UDT is a normal CLR type. In such a situation you have to reference the assembly which contains the UDT in your generated code Visual Studio.NET project (for Adapter, the database generic project). Usage of the field in .NET code is like that of any other field: you can set the field to an instance of the UDT type and save and load it normally.

SqlServer specific: SqlServer CE Desktop support
LLBLGen Pro v2.6 supports SqlServer CE Desktop v3.1 or higher, on .NET 2.0 or higher. SqlServer CE Desktop is the win32 runnable version of the database known from the compact framework, SqlServer CE 3.0. SqlServer CE Desktop is SqlServer CE v3.1 or higher, but embeds roughly the same features as SqlServer CE 3.0 or higher for the compact framework: no stored procedures, a single schema and no meta-data retrieval. It's recommended to use the latest CE Desktop version, v3.5, as it contains more features. To be able to target SqlServer CE Desktop, you first have to create a SqlServer project, similar to what you have to do for CF.NET support. Then, you have to select .NET 2.0 or higher as the target platform at tab 1 of the Generator Configuration dialog. CF.NET 1.0/2.0 and .NET 1.x aren't supported for SqlServer CE Desktop. Stored procedures aren't supported, although they might be generated into the generated code. LLBLGen Pro uses the normal SqlServer DQE assembly for query production for CE Desktop; this is a change from v2.5, which shipped a special DQE. You also have to specify the compatibility level for the DQE, to signal that it has to generate queries for SqlServer CE. For more information about this compatibility level, please see Generated code - Application configuration through .config files. When loading the generated VS.NET projects, the references to the SqlServer DQE for .NET 2.0 should be checked; if a reference is not correct, please correct it. To be able to connect to a SqlServer CE Desktop database, you have to adjust the connection string, as the generated connection string is the one used to connect to the SqlServer catalog from which the LLBLGen Pro project was created. It has to have the format shown by the following example:

<add key="Main.ConnectionString" value="data source=c:\pathtodb\Northwind.sdf;"/>

As SqlServer CE Desktop doesn't support multiple catalogs or multiple schemas, these features aren't available. Also COM+/System.Transactions transactions aren't supported. All other LLBLGen Pro native features, like dependency injection, validation and authorization, are supported. Linq is supported with SqlServer CE Desktop v3.5 or higher; however, there are limitations in SqlServer CE Desktop which make it a bit of a struggle. For example, the lack of scalar query support can lead to a lot of errors at runtime, because a scalar query in a projection or WHERE clause isn't supported by SqlServer CE Desktop.

SqlServerCe provider registration
It's no longer necessary to reference System.Data.SqlServerCe.dll; however, on the machine where the application which uses compatibility level 3 or 4 is run, this dll has to be installed as documented in the SqlServer CE Desktop documentation about deployment: via the .msi shipped with SqlServer CE Desktop. If you can't run this .msi installer, be sure your application's .config file contains the appropriate provider registration for the DbProviderFactory (this information is installed in the machine.config file by the .msi installer of SqlServer CE Desktop). More details are available in the SqlServer CE Desktop documentation (the 'Books online' documents of SqlServer CE Desktop)

Oracle specific: Ansi joins
Among the Oracle versions supported is Oracle 8i. Oracle 8i doesn't support LEFT/INNER/RIGHT OUTER JOIN syntax, or better: ANSI join syntax, and requires syntax like SELECT .. FROM A, B, C. There are three Oracle DQEs: one for ODP.NET for 8i/9i, one for ODP.NET for 10g and one for the MS Oracle provider for 8i/9i/10g. The DQE for ODP.NET and 10g is pre-configured to use ANSI join syntax; the other two aren't and default to the non-ANSI syntax as supported by Oracle 8i. To switch these DQEs to ANSI joins, add a setting to the application's .config file. Please see Generated code - Application configuration through .config files for the details.

Oracle / Firebird specific: Trigger based sequence values
It can be that your project's Oracle or Firebird database schema is used by multiple applications, among them your LLBLGen Pro based software, and that the schema is configured to use triggers to insert sequence values on row insert. To tell the Oracle DQE or Firebird DQE of choice that this is the case, and thus that it shouldn't ask for a new sequence value when a new entity is inserted, you have to add a setting to the application's .config file. Please see Generated code - Application configuration through .config files for the details.


Using the generated code, Adapter
Preface
This section describes various aspects of the generated code specific to the Adapter template group. You can generate code using the Adapter templates by choosing the template group 'Adapter' in the generator configuration dialog; see Designer - Generating code. Adapter generates two Visual Studio.NET projects into two specific directories: a database generic project and a database specific project. If you want to work with data in the database (read/write data), you need a reference to a database specific project which is generated from the same set of entities. This means any database generic project can be used with any database specific project, as long as the database specific project is generated from entities/typed views/typed lists which have the same names for entities/typed lists/typed views and fields. This way you can create software which targets a single database generic project and two or more database specific projects, one for each database type (e.g. Oracle or SqlServer). To establish this, construct tables with the same names and with fields with the same names as the tables/fields in the other database, then create two LLBLGen Pro projects and generate code using the Adapter template group for both LLBLGen Pro projects. You can then use one of the two generated database generic projects in your application, plus both database specific projects. The DataAccessAdapter class is located, together with the stored procedure call classes, in the database specific project; all other code is located in the database generic project. This means that when you want to use solely an entity class, you can just reference the database generic project and use the entity object. If you need to interact with the persistent storage, you need a reference to a database specific project which knows the configuration of the entity classes in your database generic project.
Typically, you generate both projects at once, then use the database specific project in your lower tier of your application and you reference solely the database generic project in the upper tiers of your application, for example the GUI. Stored Procedure calls are located in the database specific project, this is by design. Adapter code is not compatible with SelfServicing code. You can however use both in the same application and you can share validation classes.

Compilation of the generated VS.NET projects
To compile the two VS.NET projects, do as you normally would, load the project into VS.NET and compile it (build it). The output folders for the projects are located in the specific folders per project (DatabaseGeneric\bin... and DatabaseSpecific\bin...). This is done to prevent locking of referenced assemblies when you compile these projects in the same solution.


Generated code - DataAccessAdapter Functionality, Adapter
Preface
Using the Adapter template group which ships with LLBLGen Pro, you'll notice that there will be two VS.NET projects generated. This section describes the DataAccessAdapter class which is located in the database specific project and which is the class that performs all database activity for entities, typed lists, typed views and stored procedure calls: the DataAccessAdapter object provides the persistence service to the developer. For successful usage of the Adapter template group, it's important to understand which functionality is offered by the DataAccessAdapter class and this section shows a brief overview of that functionality. Indepth discussions of various aspects of the functionality can be found in the remainder of the Adapter documentation.

Functionality
The DataAccessAdapter class is the single class you'll need to interact with the database to fill entities, store changed data, call a stored procedure, start a transaction etc. Below is a brief overview of the various aspects of the DataAccessAdapter class.

Note : The DataAccessAdapter class is not thread safe and should not be used as such. Each thread should use its own instance, it's not safe to share a DataAccessAdapter instance among multiple threads.

Persistence Info
The entity classes, typed list classes and typed view classes do not contain any persistence information. When you want to read an entity from the database into an entity class, the DataAccessAdapter consults a class called PersistenceInfoProvider, which produces, based on field/object name, the correct persistence information for the DataAccessAdapter. This is done behind the scenes, so a developer will not notice this. The PersistenceInfoProvider uses a caching mechanism to supply the information as quickly as possible. Because it is a separate class, you are free to modify it to retrieve the information from another source than the generated code, for example from an XML file or a database.

Connection strings
The DataAccessAdapter reads the connection string automatically from the *.config file available, however it also accepts a connection string if you supply one. This means that you can target different databases under different users on a per-call basis.
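As a sketch, the connection string value below is illustrative; passing it to the constructor makes this adapter instance target that database instead of the one configured in the .config file:

```csharp
// Hypothetical connection string; normally read from the .config file.
string northwindOnStaging =
    "data source=stagingServer;initial catalog=Northwind;integrated security=SSPI;";

// All calls made through this adapter instance use the supplied string.
using(DataAccessAdapter adapter = new DataAccessAdapter(northwindOnStaging))
{
    CustomerEntity customer = new CustomerEntity("CHOPS");
    adapter.FetchEntity(customer);
}
```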

Catalog specific persistence info (SqlServer, Sybase ASE)
Adapter uses catalog specific persistence information. This means that the catalog name is generated into the persistence information. The result of this is that although the connection string might contain a different catalog name, the queries will use the catalog name generated into the persistence information for each field and database object (table/view). This can be very helpful in the scenario with multiple catalogs per project, but might not be what you want in some situations. You can overwrite the catalog name to be used on a per-call basis by specifying catalog name overwriting information to the DataAccessAdapter's constructor, or by setting properties later on. Below is an overview of the options available to you:

Single name setting (provided for backwards compatibility, not recommended). This option lets you specify a single new name for the catalog to use, or clear the catalog names altogether. It affects all catalog names known in the project and is therefore not very flexible. To use this option, use either the constructor of the DataAccessAdapter class which accepts CatalogNameToUse and CatalogNameUsageSetting, or set these properties of the DataAccessAdapter class after instantiation. CatalogNameToUse is important for CatalogNameUsageSetting.ForceName, as it will become the new name to use as catalog name.

Multi name setting. Preferred way of performing catalog name overwriting. To do this, create a new CatalogNameOverwriteHashtable (type provided by the SD.LLBLGen.Pro.ORMSupportClasses assembly). You can specify a CatalogNameUsageSetting, but this isn't required. You then add key-value pairs, where the key is the name to overwrite and the value is the name to overwrite it with. If you specify '*' as key, all catalog names will be set to the name specified as value, if CatalogNameUsageSetting is set to ForceName. Please see the LLBLGen Pro reference manual for more details on this object. The created CatalogNameOverwriteHashtable is then passable to the DataAccessAdapter constructor, or you can set the CatalogNameOverwrites property to an instance of this special hashtable.

Config file setting. You can also specify extra appSettings add-tags in your application's .config file's appSettings tag to set these overwrites. This is provided for backwards compatibility and not recommended. Add an add-tag with CatalogNameUsageSetting as the key and as value one of the following: "0" (default), "1" (forceName) or "2" (clear), and an add-tag CatalogNameToUse, which should have as value the catalog name to use for each database call. Example (which forces a catalog name overwrite on all database calls and uses the name "MyProductionCatalog"):

<configuration>
  <appSettings>
    <add key="Main.ConnectionString" value="data source=..."/>
    <add key="CatalogNameUsageSetting" value="1"/>
    <add key="CatalogNameToUse" value="MyProductionCatalog"/>
    ...
  </appSettings>
</configuration>

If you've specified these settings in your application's *.config file (web.config or app.config file (which results in executable name .exe.config)), you can just use the default DataAccessAdapter constructors and with each call these values are read from the config file. If you specify catalogNameToUse and catalogNameUsageSetting in the constructor of the DataAccessAdapter class and you specify for catalogNameUsageSetting something else than CatalogNameUsage.Default, the values specified in the *.config file will be ignored for that particular DataAccessAdapter instance, so it is still possible to override the settings specified in the *.config file on a per-call basis. Be aware that this is provided for backwards compatibility. See Application configuration through .config files for a more flexible solution.

Note : There is another way to overwrite catalog information, which is more efficient, and is required if you have multiple catalogs in your project. Please see Application configuration through .config files for more details.
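The preferred multi name setting can be sketched as follows; the catalog names are illustrative, and passing the usage setting to the hashtable's constructor is an assumption (check the reference manual for the exact overloads):

```csharp
// Map the catalog name compiled into the persistence info ("MyTestCatalog")
// to the catalog to target at runtime ("MyProductionCatalog").
CatalogNameOverwriteHashtable overwrites =
    new CatalogNameOverwriteHashtable(CatalogNameUsageSetting.ForceName);
overwrites.Add("MyTestCatalog", "MyProductionCatalog");

DataAccessAdapter adapter = new DataAccessAdapter();
adapter.CatalogNameOverwrites = overwrites;
// All queries issued through this adapter now use MyProductionCatalog.
```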

Schema specific persistence info (DB2, Oracle, PostgreSql, SqlServer, Sybase ASA, Sybase ASE)
Adapter uses schema specific persistence information. This means that the schema name is generated into the persistence information and that the queries will use the schema name generated into the persistence information for each field and database object (table/view). This can be very helpful, but might not be what you want in some situations, for example when you've defined global synonyms for tables in a particular schema and you want the generated code to target these synonyms in production. The ability to change the schema name at runtime can also be helpful if you want to target multiple schemas with the same schema objects. You can overwrite the schema name to be used on a per-call basis by specifying schema name overwriting information to the DataAccessAdapter's constructor, or by setting properties later on. This all sounds familiar from the previous catalog name overwriting paragraph, and indeed the schema name overwriting options look the same as the catalog name overwriting options. Below is an overview of the options you have:

Single name setting (provided for backwards compatibility, not recommended; only available on Oracle). This option lets you specify a single new name for the schema to use, or clear the schema names altogether. It affects all schema names known in the project and is therefore not very flexible. To use this option, use either the constructor of the DataAccessAdapter class which accepts SchemaNameToUse and SchemaNameUsageSetting, or set these properties of the DataAccessAdapter class after instantiation. SchemaNameToUse is important for SchemaNameUsageSetting.ForceName, as it will become the new name to use as schema name.

Multi name setting. Preferred way of performing schema name overwriting. To do this, create a new SchemaNameOverwriteHashtable (type provided by the SD.LLBLGen.Pro.ORMSupportClasses assembly). You can specify a SchemaNameUsageSetting, but this isn't required. You then add key-value pairs, where the key is the name to overwrite and the value is the name to overwrite it with. If you specify '*' as key, all schema names will be set to the name specified as value, if SchemaNameUsageSetting is set to ForceName. Please see the LLBLGen Pro reference manual for more details on this object. The created SchemaNameOverwriteHashtable is then passable to the DataAccessAdapter constructor, or you can set the SchemaNameOverwrites property to an instance of this special hashtable.

Config file setting. You can also specify extra appSettings add-tags in your application's .config file's appSettings tag to set these overwrites (not available on SqlServer; for SqlServer, use the preferred way explained in the Application configuration through .config files section). This is provided for backwards compatibility and not recommended. Add an add-tag with SchemaNameUsageSetting as the key and as value one of the following: "0" (default), "1" (forceName) or "2" (clear), and an add-tag SchemaNameToUse, which should have as value the schema name to use for each database call. Example (which forces a schema name overwrite on all database calls and uses the name "MyProductionSchema"):

<configuration>
  <appSettings>
    <add key="Main.ConnectionString" value="data source=..."/>
    <add key="SchemaNameUsageSetting" value="1"/>
    <add key="SchemaNameToUse" value="MyProductionSchema"/>
    ...
  </appSettings>
</configuration>

If you've specified these settings in your application's *.config file (web.config or app.config file (which results in executable name .exe.config)), you can just use the default DataAccessAdapter constructors and with each call these values are read from the config file. If you specify schemaNameToUse and schemaNameUsageSetting in the constructor of the DataAccessAdapter class and you specify for schemaNameUsageSetting something else than SchemaNameUsage.Default, the values specified in the *.config file will be ignored for that particular DataAccessAdapter instance, so it is still possible to override the settings specified in the *.config file on a per-call basis. Be aware that this is provided for backwards compatibility. See Application configuration through .config files for a more flexible solution.

Note : There is another way to overwrite schema information, which is more efficient, and is required if you have multiple schemas in your project. Please see Application configuration through .config files for more details.

Command timeouts


Sometimes a query can take a long time to complete, for example with data-processing stored procedure calls. With Adapter, you can set the timeout for each query on a per-call basis, using the property DataAccessAdapter.CommandTimeOut. The default is 30 (seconds). Firebird and SqlServer CE don't support command timeouts; a CommandTimeOut value is ignored for those databases.

Connection control
It can be useful to open a connection and keep it open for multiple actions and then close it. This can give extra performance, especially in code where multiple database fetches are used in one routine. The property KeepConnectionOpen is used to set this behaviour.
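A sketch combining connection control with the command timeout discussed above (the entity PK values follow the Northwind examples used elsewhere in this documentation):

```csharp
// Keep the connection open across multiple fetches, then close it explicitly.
DataAccessAdapter adapter = new DataAccessAdapter();
adapter.KeepConnectionOpen = true;
adapter.CommandTimeOut = 120;   // seconds; ignored on Firebird / SqlServer CE
try
{
    CustomerEntity customer = new CustomerEntity("CHOPS");
    adapter.FetchEntity(customer);   // opens the connection, keeps it open

    OrderEntity order = new OrderEntity(10254);
    adapter.FetchEntity(order);      // re-uses the open connection
}
finally
{
    adapter.CloseConnection();       // explicit close required
}
```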

Recursive saves
The DataAccessAdapter object supports recursive saves. This also works with entity collections. The logic automatically determines the order in which actions need to take place. For example:

Instantiate a Customer entity and add a new Order object to its Orders collection. Now add OrderDetails objects to the new Order object. You can simply save the Customer entity and all included new/dirty entities will be saved, and any PK-FK relations will be updated/synchronized.

Alter the Customer object in the example above, and save the Order object. The Customer object is saved first, then the Order, and then the OrderDetails objects, with all PK-FK values being synced.

This synchronization of FK-PK values is already done at the moment you set a property to a reference of an entity object, for example myOrder.Customer = myCustomer, if the entity (in this case myCustomer) is not new. Synchronization is also performed after a save action, so identity/sequenced columns are also synchronized.
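The first example above can be sketched as follows; field values are illustrative, and the SaveEntity overload used (refetch and recurse flags) is assumed from the reference manual:

```csharp
DataAccessAdapter adapter = new DataAccessAdapter();
CustomerEntity customer = new CustomerEntity("CHOPS");
adapter.FetchEntity(customer);

// Build the graph: new order for this customer.
OrderEntity order = new OrderEntity();
order.OrderDate = DateTime.Now;
customer.Orders.Add(order);   // FK Order.CustomerID is synced automatically

// One call persists the customer plus all new/dirty entities in its graph,
// in the right order (true = refetch after save, true = save recursively).
adapter.SaveEntity(customer, true, true);
```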

Fetching/deleting/saving entities/typed lists/typed views
The DataAccessAdapter object offers full support for fetching/deleting/saving entities and entity collections and filling typed lists and typed views. It also supports, as SelfServicing, directly updating/deleting of entities in the persistent storage.

Calling stored procedures
Calling stored procedures is fully supported by the DataAccessAdapter object. The DataAccessAdapter object controls the transactions, so you can now call a stored procedure inside an existing transaction.

Transactions
Adapter fully supports both COM+ transactions and ADO.NET transactions through the ComPlusAdapterContext (for COM+ transactions) and the DataAccessAdapter object (for normal ADO.NET transactions). You can start a transaction using the DataAccessAdapter class and all actions performed after that are executed inside that transaction. It doesn't matter if you fetch collections, typed lists, delete / save entities or call a stored procedure. All multi-entity affecting actions like recursive saves and the save of an entity collection, or the deletion of a set of entities, is, as in SelfServicing, always performed inside a transaction: if an existing transaction is present, that transaction is used, otherwise a new transaction is created and used.
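An ADO.NET transaction spanning multiple actions can be sketched like this (the transaction name is illustrative):

```csharp
DataAccessAdapter adapter = new DataAccessAdapter();
adapter.StartTransaction(IsolationLevel.ReadCommitted, "UpdatePrices");
try
{
    // ... fetches, saves, deletes and stored procedure calls performed
    // with this adapter now run inside the transaction ...
    adapter.Commit();
}
catch
{
    adapter.Rollback();
    throw;
}
```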

Intercepting activity calls
The DataAccessAdapter class has functionality on board to let you perform actions during various stages of a process, for example right before a save action, or when an entity is fetched. These methods start with 'On', are defined as virtual, and are by definition implemented as empty methods (no-ops). Please consult the LLBLGen Pro reference manual and inspect the DataAccessAdapterBase class' members for the details about these methods; DataAccessAdapterBase is the base class for every DataAccessAdapter class. If you want to perform a given action when one of these methods is called, you can override it in the generated DataAccessAdapter class, preferably using the methods discussed in Adding your own code to the generated classes. Please consult the LLBLGen Pro reference manual, available in the LLBLGen Pro installation folder, for details about these methods (DataAccessAdapterBase.On..), when they're called and what is passed in.
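As a sketch, such an override could look like the following; the method name and signature are assumptions for illustration, so verify them against DataAccessAdapterBase in the reference manual:

```csharp
// Partial class extending the generated DataAccessAdapter.
public partial class DataAccessAdapter
{
    // Assumed 'On' method; called right before a save query is executed.
    protected override void OnSaveEntity(IActionQuery saveQuery, IEntity2 entityToSave)
    {
        // e.g. log every save action before it happens
        Console.WriteLine("Saving: " + entityToSave.LLBLGenProEntityName);
        base.OnSaveEntity(saveQuery, entityToSave);
    }
}
```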

ArithAbort flag (SqlServer only)
If an entity is saved into a table which is part of an indexed view, SqlServer requires that SET ARITHABORT ON is specified prior to the actual save action. You can tell LLBLGen Pro to set that option by calling the method DataAccessAdapter.SetArithAbortFlag(bool). After each SQL statement, a SET ARITHABORT OFF statement will be executed if the ArithAbort flag is set to true. Setting this flag affects all INSERT statements following the call to SetArithAbortFlag(), until you call that method again.

DQE compatibility mode (SqlServer only)

With the arrival of SqlServer 2005 and its new features, it was required to make the SqlServer DQE configurable so it could generate SQL which is optimal for the database version used. To set the compatibility mode of the SqlServer DQE in code, you can use the DataAccessAdapter method SetSqlServerCompatibilityLevel, as shown in the following example which sets the compatibility mode to SqlServer 2005:
C# VB.NET

// C#
DataAccessAdapter.SetSqlServerCompatibilityLevel( SqlServerCompatibilityLevel.SqlServer2005 );

' VB.NET
DataAccessAdapter.SetSqlServerCompatibilityLevel( SqlServerCompatibilityLevel.SqlServer2005 )

The different compatibility modes are: SqlServerCompatibilityLevel.SqlServer7 (or the value 0), SqlServerCompatibilityLevel.SqlServer2000 (or the value 1) and SqlServerCompatibilityLevel.SqlServer2005 (or the value 2). The default is SqlServer2000. The values 0, 1 or 2 have to be used when you're using the .config file parameter; see Generated code - Application configuration through .config files for more details about that parameter. Setting the compatibility level controls the sequence retrieval logic to use by default (@@IDENTITY on SqlServer 7, SCOPE_IDENTITY() on 2000/2005), the ability to use NEWSEQUENTIALID() (SqlServer 2005) and the SQL produced for a paging query: a temp table approach is used on SqlServer 7 and 2000, and a CTE approach is used on SqlServer 2005.


Generated code - Using the entity classes, Adapter
Preface
Using the Adapter template group which ships with LLBLGen Pro, you'll notice that there are two VS.NET projects generated. This section describes code referencing both projects, as it needs to interact with the database. The code used in the Adapter section of the documentation uses the General preset, which results in one class per entity. If you want two classes per entity, you have to use the TwoClasses preset, which results in two classes per entity, one being named MyEntityNameEntity, the one you'd use in your code like the code in this section. All entity classes derive from a central, generated base class called CommonEntityBase. This class is the base class for all generated entity classes and it derives from the class EntityBase2, which is located in the ORMSupportClasses assembly. The CommonEntityBase class can be used to add code (via a partial class or the user code regions) to all generated entities without having to add that code to each entity class separately.

Instantiating an existing entity
As described in the concepts section, an entity is a semantic name for a group of existing data. An entity has a definition, the entity definition, which is formulated in a database table/view and, when you add that entity definition to your project, also in code, namely in the EntityNameEntityBase.cs/vb class. To load the entity's data from the persistent storage, we use the generated class related to this entity's definition, create an instance of that class and order it to load the data of the particular entity via a DataAccessAdapter object. As an example, we're loading the entity identified with the customerID "CHOPS" into an object.

Using the primary key value
One way to instantiate the entity in an object is by passing all primary key values to the constructor of the entity class to use:
C# VB.NET

// [C#]
DataAccessAdapter adapter = new DataAccessAdapter();
CustomerEntity customer = new CustomerEntity("CHOPS");
adapter.FetchEntity(customer);

' [VB.NET]
Dim adapter As New DataAccessAdapter()
Dim customer As New CustomerEntity("CHOPS")
adapter.FetchEntity(customer)

This will load the entity with the primary key value of "CHOPS" into the object named customer, directly from the persistent storage. LLBLGen Pro doesn't use an in-memory cache of objects, to prevent concurrency issues among multiple threads in multiple appdomains (which is the case when you run a client on two or more machines, when you have a webfarm, or when your business logic runs on multiple machines).

Using a related entity
Another way to instantiate this same entity is via a related entity. Adapter doesn't support automatic data loading when you traverse a relation; all data has to be fetched up-front. A related entity, however, offers a way to formulate the exact filters to fetch a specific entity very easily. Let's load the order with ID 10254, which is an order of customer "CHOPS", and via that order, load an instance of the entity "CHOPS". The example uses the KeepConnectionOpen feature by passing true to the constructor of the DataAccessAdapter object. The example explicitly closes the connection after the DataAccessAdapter usage is finished.
C# VB.NET

// [C#]
DataAccessAdapter adapter = new DataAccessAdapter(true);
OrderEntity order = new OrderEntity(10254);
adapter.FetchEntity(order);
order.Customer = (CustomerEntity)adapter.FetchNewEntity(new CustomerEntityFactory(),
    order.GetRelationInfoCustomer());
adapter.CloseConnection();

' [VB.NET]
Dim order As New OrderEntity(10254)
Dim adapter As New DataAccessAdapter(True)
adapter.FetchEntity(order)
order.Customer = CType(adapter.FetchNewEntity(New CustomerEntityFactory(), _
    order.GetRelationInfoCustomer()), CustomerEntity)
adapter.CloseConnection()

By setting order.Customer to an instance of CustomerEntity, the logic automatically sets the CustomerID field of order to the CustomerID of the specified CustomerEntity instance. Also, the order object will be added to the CustomerEntity instance's 'Orders' collection. This means that the following is true after order.Customer = (CustomerEntity)adapter.FetchNewEntity(new CustomerEntityFactory(), order.GetRelationInfoCustomer()):

order.CustomerID is equal to order.Customer.CustomerID
order.Customer.Orders.Contains(order) is true

The logic keeps the two in sync as well. Consider the following situation: a new EmployeeEntity instance employee, which has an autonumber primary key field, and a new OrderEntity instance order. When the following is done: order.Employee = employee; and the order is saved (or the employee), the field order.EmployeeID is automatically set to the new key field value of the employee object after employee is saved. If Customer is in an inheritance hierarchy, the fetch is polymorphic. This means that if the order entity, in this case order 10254, has a reference to a derived type of Customer, for example GoldCustomer, the entity returned will be of type GoldCustomer. See also Polymorphic fetches below.

Using a unique constraint's value
Entities can have other unique identifying attributes, which are defined in the database as unique constraints. In addition to the primary key, these unique values can be used to load an entity. The customer entity has a unique constraint defined on its field 'CompanyName', so we can use that field to load the same entity the CHOPS example loaded above. (A unique constraint with the same field types as the primary key would result in the same method signature and would not compile.) Fetching the entity using a unique constraint is done in these steps: first create an empty entity object, set the fields which form the unique constraint to the lookup value, then fetch the entity data using a special method call of the DataAccessAdapter. Because an entity can have more than one unique constraint, you have to specify which unique constraint to use, or better: specify a filter for the unique constraint columns. Entities with unique constraints have methods to construct these filters automatically, as shown in the following example. The entity Customer has a unique constraint with one field, CompanyName:
C# VB.NET

// [C#]
DataAccessAdapter adapter = new DataAccessAdapter();
CustomerEntity customer = new CustomerEntity();
customer.CompanyName = "Chop-suey Chinese";
adapter.FetchEntityUsingUniqueConstraint(customer, customer.ConstructFilterForUCCompanyName());

' [VB.NET]
Dim adapter As New DataAccessAdapter()
Dim customer As New CustomerEntity()
customer.CompanyName = "Chop-suey Chinese"
adapter.FetchEntityUsingUniqueConstraint(customer, customer.ConstructFilterForUCCompanyName())

Using a prefetch path
An easy way to instantiate an entity is to use a Prefetch Path, which reads related entities together with the entity or entities to fetch. See Prefetch Paths for more information about Prefetch Paths and how to use them.

Using a collection class
The last way to instantiate an entity is by fetching a collection with one or more entities of the same entity definition (entity type, like Customer) using the EntityCollection classes, or via a related entity which has a 1:n relation with the entity to instantiate. This method is described in detail in the section about collection classes. You can also see Tutorials and Examples: How Do I? - Read all entities into a collection.

Using a Context object
If you want to get a reference to an entity object already in memory, you can use a Context object, provided that object was added to that particular Context object. The example below retrieves a reference to the customer object with PK "CHOPS", if that entity was previously loaded into an entity object which was added to that Context object. If the entity object isn't in the Context object, a new entity object is returned.
C# VB.NET

// C#
CustomerEntity customer = (CustomerEntity)myContext.Get(new CustomerEntityFactory(), "CHOPS");
if(customer.IsNew)
{
    // not found in context, fetch from database (assumes 'adapter' is a DataAccessAdapter instance)
    adapter.FetchEntity(customer);
}

' VB.NET
Dim customer As CustomerEntity = CType(myContext.Get(New CustomerEntityFactory(), "CHOPS"), CustomerEntity)
If customer.IsNew Then
    ' not found in context, fetch from database (assumes 'adapter' is a DataAccessAdapter instance)
    adapter.FetchEntity(customer)
End If

Polymorphic fetches
Already mentioned earlier in this section is the phenomenon called 'polymorphic fetches'. Imagine the following entity setup: the BoardMember entity has a relation (m:1) with CompanyCar. CompanyCar is the root of a TargetPerEntityHierarchy inheritance hierarchy and has two subtypes: FamilyCar and SportsCar. Because BoardMember has the relation with CompanyCar, a field called 'CompanyCar' is created in the BoardMember entity, mapped onto the m:1 relation BoardMember - CompanyCar. In the database, several BoardMember instances have been stored, as well as several different CompanyCar instances, of type FamilyCar or SportsCar. Using DataAccessAdapter.FetchNewEntity, you can load the related CompanyCar instance of a given BoardMember instance by using the following code:
C# VB.NET

// C#
CompanyCarEntity car = adapter.FetchNewEntity(new CompanyCarEntityFactory(),
    myBoardMember.GetRelationInfoCompanyCar());

' VB.NET
Dim car As CompanyCarEntity = adapter.FetchNewEntity(New CompanyCarEntityFactory(), _
    myBoardMember.GetRelationInfoCompanyCar())

However, 'car' in the example above can be of a different type. If, for example, the BoardMember instance in myBoardMember has a FamilyCar set as company car, 'car' is of type FamilyCar. Because the fetch action can result in multiple types, the fetch is called polymorphic. So, in our example, if 'car' is of type FamilyCar, the following code would also be correct:
C# VB.NET

// C#
FamilyCarEntity car = (FamilyCarEntity)adapter.FetchNewEntity(new CompanyCarEntityFactory(),
    myBoardMember.GetRelationInfoCompanyCar());

' VB.NET
Dim car As FamilyCarEntity = CType(adapter.FetchNewEntity(New CompanyCarEntityFactory(), _
    myBoardMember.GetRelationInfoCompanyCar()), FamilyCarEntity)

Had this BoardMember instance had a SportsCar set as company car, this code would fail at runtime with a 'Specified cast is not valid' exception.

DataAccessAdapter.FetchEntity and hierarchical entities
DataAccessAdapter.FetchEntity() is not polymorphic. This is by design, as it fetches the entity data into the passed-in entity object. Because that object is already an instance, it would be impossible to change that instance's type to a derived type if the PK values identify an entity which is a subtype of the type of the passed-in entity instance. In our previous example about BoardMember and CompanyCar, BoardMember is a derived type of Manager, which is a derived type of Employee. While this might not be the best OO hierarchy thinkable, it's enough to illustrate the point: if FetchEntity is called by passing in an Employee instance, and the PK identifies a BoardMember, only the Employee's fields are loaded. However, if the entity is in a hierarchy of type TargetPerEntity, LLBLGen Pro will perform joins from the supertype with all subtypes, to make sure the data for the type is read correctly.

Note : Be aware that polymorphic fetches of entities in a TargetPerEntity hierarchy (see Concepts - Entity inheritance and relational models) use JOINs between the root entity's target and all subtype targets when the root type is specified for the fetch. This can have an impact on performance.

Creating a new / modifying an existing entity
Loading an entity is nice, but it has to be created before it can be loaded. To create a new entity, simply instantiate an empty entity object, in this case a new Customer:
C# VB.NET

// [C#]
CustomerEntity customer = new CustomerEntity();

' [VB.NET]
Dim customer As New CustomerEntity()

To create the entity in the persistent storage, two things have to be done: 1) the entity's new data has to be stored in the new entity object, and 2) the entity data has to be persisted / saved in the persistent storage. Let's add the customer Foo Inc. to the database:
C# VB.NET

// [C#]
customer.CustomerID = "FOO";
customer.Address = "1, Bar drive";
customer.City = "Silicon Valley";
customer.CompanyName = "Foo Inc.";
customer.ContactName = "John Coder";
customer.ContactTitle = "Owner";
customer.Country = "USA";
customer.Fax = "(604)555-1233";
customer.Phone_Number = "(604)555-1234";
customer.PostalCode = "90211";
// save it. We require an adapter for this
DataAccessAdapter adapter = new DataAccessAdapter();
adapter.SaveEntity(customer, true);

' [VB.NET]
customer.CustomerID = "FOO"
customer.Address = "1, Bar drive"
customer.City = "Silicon Valley"
customer.CompanyName = "Foo Inc."
customer.ContactName = "John Coder"
customer.ContactTitle = "Owner"
customer.Country = "USA"
customer.Fax = "(604)555-1233"
customer.Phone_Number = "(604)555-1234"
customer.PostalCode = "90211"
' save it. We require an adapter for this
Dim adapter As New DataAccessAdapter()
adapter.SaveEntity(customer, True)

Region isn't filled in, which is fine: it can be NULL and will therefore also end up as NULL in the database. This saves the data directly to the persistent storage (database), and the entity is immediately available to other threads / appdomains targeting the same database, because we've specified that it should be refetched right after the SaveEntity() action succeeds. The entity object customer itself is marked 'out of sync'. This means that the entity's data has to be refetched from the database before one of the entity's properties is read. SelfServicing handles this automatically, but with Adapter you must refetch manually using an adapter object. In our example we specified true in the SaveEntity() call, which automatically refetches the entity for us. If you do not require the saved entity for any further processing, you don't need to refetch it and can save yourself a roundtrip by simply omitting the 'true' in the SaveEntity() call.
The code is aware of sequences / identity columns and will automatically set the value for an identity / sequence column after the entity is physically saved inside SaveEntity(). The new value for sequenced columns is available to you after SaveEntity(), even if you haven't specified that the entity has to be refetched. This can be helpful if you want to refetch the entity later. If you're using a database which uses sequences, like Oracle or Firebird, be sure to define the field which should be used with a sequence as identity in the entity editor. Because the entity saved is new (customer.IsNew is true), SaveEntity() will use an INSERT query. After a successful save, the IsNew flag is set to false and the State property of the Fields object of the saved entity is set to EntityState.Fetched (if the entity is also refetched) or EntityState.OutOfSync.
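As an illustration of the sequenced-column behavior described above, the following sketch saves a new entity with an identity primary key and reads the generated key back without a refetch. It assumes a generated EmployeeEntity whose EmployeeID field is mapped to an identity column (as in the synchronization example earlier in this section); the LastName field is an illustrative assumption, not taken from the text.

```csharp
// Sketch, assuming a generated EmployeeEntity with an identity PK (EmployeeID).
EmployeeEntity employee = new EmployeeEntity();
employee.LastName = "Coder";          // hypothetical non-PK field
DataAccessAdapter adapter = new DataAccessAdapter();
adapter.SaveEntity(employee, false);  // no refetch requested
// The identity value is read back with the INSERT statement itself,
// so the new key is available even without a refetch:
int newId = employee.EmployeeID;
```

If other fields get values from triggers or default constraints, a refetch (passing true) would still be required to see those values, as the note below explains.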

Note : Fields which get their values from a trigger, from newid() or from a default constraint calling a user-defined function are not considered sequenced fields, and these values will not be read back, so you'll have to supply a value for these fields prior to saving the entity. This isn't true for fields of type unique_identifier on SqlServer 2005 when the DQE is set to SqlServer 2005 compatibility mode and the field has a default value of NEWSEQUENTIALID() in the database. See: Generated code - Database specific features.

Note : If the entity is in a hierarchy of type TargetPerEntityHierarchy (see Concepts - Entity inheritance and relational models ) you don't have to set the discriminator value for the entity type, this is done for you automatically: just create a new instance of the entity type you want to use, and the discriminator value is automatically set and will be saved with the entity.

Modifying an entity
Modifying an entity's data is just as simple and can be done in multiple ways:

1. Load an existing entity in memory, alter one or more fields (not sequenced fields) and call a DataAccessAdapter object's SaveEntity() method.
2. Create a new entity, set the primary key values (used for filtering), set IsNew to false, set one or more other fields' values and call a DataAccessAdapter object's SaveEntity() method. This will not alter the PK fields.
3. Use the DataAccessAdapter's UpdateEntitiesDirectly() method, specifying the primary key fields as the filter.

Option 1 is likely the most used one, since an entity might already be in memory. As with all the suggested options, the DataAccessAdapter's SaveEntity() method will see that the entity isn't new, and will therefore use an UPDATE query to alter the entity's data in the persistent storage. An UPDATE query will only update the changed fields in an entity that is saved, which results in efficient queries. If no fields are changed, no update is performed. If you've loaded an entity from the database into memory and you've changed one or more of its primary key fields, these fields will be updated in the database as well (except sequenced columns). Changing PK fields is not recommended, and changed PK fields are not propagated to related entities fetched in memory. An example for code using the first method (it keeps the connection open for performance):
C# VB.NET

// [C#]
CustomerEntity customer = new CustomerEntity("CHOPS");
DataAccessAdapter adapter = new DataAccessAdapter(true);
adapter.FetchEntity(customer);
customer.Phone = "(605)555-4321";
adapter.SaveEntity(customer);
adapter.CloseConnection();

' [VB.NET]
Dim customer As New CustomerEntity("CHOPS")
Dim adapter As New DataAccessAdapter(True)
adapter.FetchEntity(customer)
customer.Phone = "(605)555-4321"
adapter.SaveEntity(customer)
adapter.CloseConnection()

This will first load the Customer entity "CHOPS" into memory, alter one field, Phone, and then save that single field back into the persistent storage. Because loading "CHOPS" already set the primary key, we can just alter a field and call SaveEntity(). The UPDATE query will solely set the table field related to the entity field "Phone". Reading an entity into memory first can be somewhat inefficient, since all we need to do is update an entity row in the database. Option 2 is more efficient in that it just starts an update, without first reading the data from the database. The following code performs the same update as the previous example illustrating option 1. Because it doesn't need to read an entity first, it doesn't have to keep the connection open. Even though the PK field is set, it is not updated, because it was not previously fetched from the database.
C# VB.NET

// [C#]
CustomerEntity customer = new CustomerEntity();
customer.CustomerID = "CHOPS";
customer.IsNew = false;
customer.Phone = "(605)555-4321";
DataAccessAdapter adapter = new DataAccessAdapter();
adapter.SaveEntity(customer);

' [VB.NET]
Dim customer As New CustomerEntity()
customer.CustomerID = "CHOPS"
customer.IsNew = False
customer.Phone = "(605)555-4321"
Dim adapter As New DataAccessAdapter()
adapter.SaveEntity(customer)

We have to set the primary key field, so the update will only affect a single entity, the "CHOPS" entity. Next, we have to mark the new, empty entity object as not being new, so SaveEntity() will use an UPDATE query instead of an INSERT query. This is done by setting the flag IsNew to false. After that comes the altering of a field, in this case "Phone", and the call to SaveEntity(). This will not load the entity back in memory. If you want that, specify 'true' in the SaveEntity() call. Doing updates this way can be very efficient, and you can use very complex update constructs when you apply an Expression to the field(s) to update. See Field expressions and aggregates for more information about Expression objects for fields.
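Such an Expression-based update could be sketched as follows. This is a sketch combining the ExpressionToApply pattern from Field expressions and aggregates with the direct-update approach of option 3 below; the ProductEntity and its UnitPrice field are assumptions based on the product example later in this section.

```csharp
// Sketch: raise the price of all products in category 3 by 10%
// without fetching them first. Assumes generated ProductEntity /
// ProductFields classes; UnitPrice is an illustrative field.
RelationPredicateBucket bucket = new RelationPredicateBucket();
bucket.PredicateExpression.Add(ProductFields.CategoryId == 3);
ProductEntity updateValues = new ProductEntity();
// Apply an expression instead of a plain value:
updateValues.Fields[(int)ProductFieldIndex.UnitPrice].ExpressionToApply =
    (ProductFields.UnitPrice * 1.1M);
DataAccessAdapter adapter = new DataAccessAdapter();
// Emits roughly: UPDATE ... SET UnitPrice = UnitPrice * 1.1 WHERE CategoryId = 3
adapter.UpdateEntitiesDirectly(updateValues, bucket);
```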

Notes :
- This same mechanism also works for fast deletes.
- If you want to set an identity primary key column, you'll notice you can't, because it is marked as read-only. Use the method entityObject.Fields[fieldindex or fieldname].ForcedCurrentValueWrite(value). See the reference manual for details about this method (EntityField2.ForcedCurrentValueWrite).
- Setting a field to the same value it already has will not set the field (and will not mark the field as 'changed') unless the entity is new.
- Recursive saves are performed by default. This means that the DataAccessAdapter SaveEntity() logic will check whether included entities also have to be saved. In our examples above this is not the case, but in your own code it can be. If you do not want this, you can specify 'false' for recursive saves in an overload of SaveEntity(), in which case only the specified entity will be saved.
- Each entity which is saved is validated prior to the save action. This validation can be a no-op if no validation code has been added by the developer, either through code added to the entity or through a validator class. See Validation per field or per entity for more information about LLBLGen Pro's validation functionality.
- (SqlServer specific) If the entity is saved into a table which is part of an indexed view, SqlServer requires that SET ARITHABORT ON is specified prior to the actual save action. You can tell the DataAccessAdapter class to set that option by calling the SetArithAbortFlag(bool) method. After each SQL statement, a SET ARITHABORT OFF statement will be executed if the ArithAbort flag is set to true. Setting this flag affects the whole application.

Option 3 is implemented directly in the DataAccessAdapter object. The DataAccessAdapter allows you to manipulate an entity or group of entities directly in the database without first fetching them into memory. This can be much faster than the conventional methods described in options 1 and 2. Below we're setting the 'Discontinued' field of all product entities to true if the CategoryId of the product is equal to 3. UpdateEntitiesDirectly() (as well as its look-alike method DeleteEntitiesDirectly(), which deletes entities directly from the persistent storage) returns the number of entities affected by the call, or -1 if row counting is switched off inside the database system (SqlServer).
C# VB.NET

// [C#]
RelationPredicateBucket bucket = new RelationPredicateBucket();
bucket.PredicateExpression.Add(ProductFields.CategoryId == 3);
ProductEntity updateValuesProduct = new ProductEntity();
updateValuesProduct.Discontinued = true;
DataAccessAdapter adapter = new DataAccessAdapter();
int amountUpdated = adapter.UpdateEntitiesDirectly(updateValuesProduct, bucket);

' [VB.NET]
Dim bucket As New RelationPredicateBucket()
bucket.PredicateExpression.Add(New FieldCompareValuePredicate(ProductFields.CategoryId, _
    Nothing, ComparisonOperator.Equal, 3))
Dim updateValuesProduct As New ProductEntity()
updateValuesProduct.Discontinued = True
Dim adapter As New DataAccessAdapter()
Dim amountUpdated As Integer = adapter.UpdateEntitiesDirectly(updateValuesProduct, bucket)

Setting the EntityState to Fetched automatically after a save
By design, an entity which was successfully saved to the database gets the EntityState OutOfSync. If you've specified to refetch the entity after the save, the entity is then refetched with an additional fetch statement. This is done to make sure that default constraints, calculated fields and other elements which could have been changed inside the database after the save action (for example because a database trigger ran after the save action) are reflected in the entity. If you know that this won't happen in your application, you can get a performance gain by specifying that LLBLGen Pro should mark a successfully saved entity as Fetched instead of OutOfSync. In this situation, LLBLGen Pro won't perform a fetch action to obtain the new entity values from the database. To use this feature, you have to set the static/Shared property EntityBase2.MarkSavedEntitiesAsFetched to true (default: false). This is used for all entities in your application, so if you have entities which do have to be refetched after an update (for example because they have a timestamp field), you should keep the default, false. You can also set this value using the config file of your application by adding the following line to the appSettings section of your application's config file: <add key="markSavedEntitiesAsFetched" value="true"/> You don't need to refetch an entity which has a sequenced primary key (identity or sequence), as these values are read back directly with the INSERT statement.

FK-PK synchronization
Foreign key synchronization with a related primary key field is done automatically in code. For example: instantiate a Customer entity and add a new Order object to its Orders collection. Now add OrderDetails objects to the new Order object's OrderDetails collection. You can simply save the Customer entity, and all included new/'dirty' entities will be saved, with any PK-FK relations updated/synchronized automatically: the Customer object is saved first, then the Order, and then the OrderDetails objects, with all PK-FK values being synced. This synchronization of FK-PK values is already done at the moment you set a property to a reference of an entity object, for example myOrder.Customer = myCustomer, if the entity (in this case myCustomer) is not new, or if the PK field(s) aren't sequenced fields when the entity is new. Synchronization is also performed after a save action, so identity/sequenced columns are synchronized as well.
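The recursive save scenario just described can be sketched as follows. This is a sketch assuming the generated Northwind-style Customer/Order/OrderDetails entities used throughout this section; the specific field values are illustrative.

```csharp
// Sketch: recursive save with automatic PK-FK synchronization.
// Assumes generated CustomerEntity / OrderEntity / OrderDetailsEntity classes.
CustomerEntity customer = new CustomerEntity();
customer.CustomerID = "FOO2";       // illustrative PK value
customer.CompanyName = "Foo2 Inc.";

OrderEntity order = new OrderEntity();
customer.Orders.Add(order);         // order.CustomerID is synced to "FOO2"

OrderDetailsEntity detail = new OrderDetailsEntity();
order.OrderDetails.Add(detail);     // detail's OrderID is synced after the order's save

DataAccessAdapter adapter = new DataAccessAdapter();
// Recursive save: customer first, then order, then detail,
// with FK values synchronized along the way.
adapter.SaveEntity(customer);
```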

If you set a foreign key field (for example Order.CustomerID) to a new value, the referenced entity by the foreign key (relation) the field is part of will be dereferenced and the field mapped onto that relation is set to null (C#) or Nothing (VB.NET). Example:
C# VB.NET

// C#
OrderEntity myOrder = new OrderEntity();
CustomerEntity myCustomer = new CustomerEntity("CHOPS");
adapter.FetchEntity(myCustomer);
myOrder.Customer = myCustomer;                           // A
myOrder.CustomerID = "BLONP";                            // B
CustomerEntity referencedCustomer = myOrder.Customer;    // C

' VB.NET
Dim myOrder As New OrderEntity()
Dim myCustomer As New CustomerEntity("CHOPS")
adapter.FetchEntity(myCustomer)
myOrder.Customer = myCustomer                            ' A
myOrder.CustomerID = "BLONP"                             ' B
Dim referencedCustomer As CustomerEntity = myOrder.Customer ' C

After line 'A', myOrder.CustomerID will be set to "CHOPS", because of the synchronization between the PK of Customer and the FK of Order. At line 'B', the foreign key field CustomerID of Order is changed to a new value, "BLONP". Because the FK field changes, the entity referenced through that FK field, Customer, is dereferenced, and myOrder.Customer will return null/Nothing. Because there is no current referenced customer entity, the variable referencedCustomer will be set to null / Nothing at line 'C'. The opposite is also true: if you set the property which represents a related entity to null (Nothing), the FK field(s) forming this relation will be set to null as well, as shown in the following example:
C# VB.NET

// C#
PrefetchPath2 path = new PrefetchPath2((int)EntityType.OrderEntity);
path.Add(OrderEntity.PrefetchPathCustomer);
OrderEntity myOrder = new OrderEntity(10254);
adapter.FetchEntity(myOrder, path); // A
myOrder.Customer = null; // B

' VB.NET
Dim path As New PrefetchPath2(CInt(EntityType.OrderEntity))
path.Add(OrderEntity.PrefetchPathCustomer)
Dim myOrder As New OrderEntity(10254)
adapter.FetchEntity(myOrder, path) ' A
myOrder.Customer = Nothing ' B

At line A, the prefetch path loads the related Customer entity together with the Order entity 10254. At line B, this customer is dereferenced. This means that the FK field of order creating this relation, myOrder.CustomerID, will be set to null (Nothing). So if myOrder is saved after this, NULL will be saved in the field Order.CustomerID.

Deleting an entity
Deleting an entity is as simple as saving one. Create a new entity instance in memory, set the PK field values and call the DataAccessAdapter's method DeleteEntity(). You can also delete an entity using an entity collection (using the DataAccessAdapter's method DeleteEntityCollection) or remove it from the persistent storage directly (using the DataAccessAdapter's method DeleteEntitiesDirectly). To delete it the simple way: create the new entity object, set the PK field value and call DeleteEntity(). We keep the connection open. (Instead of using a new entity, you can also pass an existing entity object to DeleteEntity().)
C# VB.NET

// [C#]
DataAccessAdapter adapter = new DataAccessAdapter(true);
CustomerEntity customer = new CustomerEntity("CHOPS");
adapter.DeleteEntity(customer);
adapter.CloseConnection();

' [VB.NET]
Dim adapter As New DataAccessAdapter(True)
Dim customer As New CustomerEntity("CHOPS")
adapter.DeleteEntity(customer)
adapter.CloseConnection()

It's wise to start a transaction if you want to be able to roll back the delete later on in your routine. For more information about transactions, see the section about Transactions.

Note : Deletes are never recursive. This means that if the delete action of an entity violates a foreign key constraint, an exception is thrown.
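A direct delete via DeleteEntitiesDirectly, mentioned above, could be sketched as follows. This is a sketch: it assumes the generated Northwind-style OrderEntity and that DeleteEntitiesDirectly accepts the entity name plus a filter bucket; treat the exact filter as illustrative.

```csharp
// Sketch: delete all orders of customer "CHOPS" directly,
// without fetching them into memory first.
RelationPredicateBucket bucket = new RelationPredicateBucket();
bucket.PredicateExpression.Add(OrderFields.CustomerID == "CHOPS");
DataAccessAdapter adapter = new DataAccessAdapter();
// Deletes never cascade: if related OrderDetails rows still exist,
// the database's FK constraint makes this throw an exception.
int amountDeleted = adapter.DeleteEntitiesDirectly("OrderEntity", bucket);
```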

Entity state in distributed systems
In distributed environments, you work disconnected: the client holds data and doesn't have a connection with the server for manipulating the data in the client process; it only contacts the service for persistence and data retrieval from the database. To understand the state of an entity object, think in these steps:

1. Create containers (entity objects)
2. Add data to / load data in containers (from the server, for example)
3. Show data in modifiers (forms)
4. Data is modified and collected for persistence
5. Collected data is sent to the server for persistence
6. Process is ended

After step 6, the state should be considered void. It's up to you to ignore that and keep data around on the client. But as you work disconnected, there is no feedback from the server, so for example if you send an entity from client to server and it is saved there, you won't all of a sudden have an out-of-sync entity on the client, as that's just a copy of the object on the server. So if you want to keep on working on the client with the data, you have to consider that after step 6 you have to rewind to step 1 or 2, unless you know what you can keep (read-only data, for example). If you're in step 6 and you rewind to step 4, you're modifying data which is out of sync with the server. LLBLGen Pro doesn't provide you with a layer which takes care of that, as you should control that yourself, because only then does the developer have full control over what happens when. So when you send a UnitOfWork2 object to the server, you have to realize you're in step 5 moving to step 6 and it's all over for that process. If that's not the case, then you shouldn't move from step 4 to step 5 there, but wait and persist the data later.

Concurrency control
Adapter contains an advanced concurrency mechanism, set up in such a way that you can decide how to implement concurrency control in your application. It is often better to handle concurrency aspects at a high level in your application; however, if you are required to check whether a save can take place or not, you can. Like SelfServicing, Adapter allows you to specify a predicate expression object with the SaveEntity() method. This predicate expression is included in the UPDATE query (it's ignored in an INSERT query), so you can specify exactly when a save should take place. Adapter also allows you to implement the interface IConcurrencyPredicateFactory; instances of that interface can be set on entity objects. If such a factory is present inside an entity, SaveEntity() will automatically request a predicate object from that factory to include in the UPDATE query. This way you can still provide concurrency predicates during a recursive save action. To filter on the original database values fetched into the entity to be saved, you can for example create FieldCompareValuePredicate instances which use the EntityField2's DbValue property. Even though a field is changed in memory through code, the DbValue property of a field will still have the original value read from the database. You can use this for optimistic concurrency schemes. If the field is NULL in the database, DbValue is null (C#) or Nothing (VB.NET). See Getting started with filtering for more information on predicates and filtering. Below is an example implementation of IConcurrencyPredicateFactory, which returns predicates testing for equality on EmployeeID for the particular order. This makes sure the save or delete action will only succeed if the entity in the database still has the same value for EmployeeID as the in-memory entity had when it was loaded from the database.
C# VB.NET

// [C#]
private class OrderConcurrencyFilterFactory : IConcurrencyPredicateFactory
{
    public IPredicateExpression CreatePredicate(
        ConcurrencyPredicateType predicateTypeToCreate, object containingEntity)
    {
        IPredicateExpression toReturn = new PredicateExpression();
        OrderEntity order = (OrderEntity)containingEntity;
        switch(predicateTypeToCreate)
        {
            case ConcurrencyPredicateType.Delete:
                toReturn.Add(OrderFields.EmployeeID ==
                    order.Fields[(int)OrderFieldIndex.EmployeeID].DbValue);
                break;
            case ConcurrencyPredicateType.Save:
                // only for updates
                toReturn.Add(OrderFields.EmployeeID ==
                    order.Fields[(int)OrderFieldIndex.EmployeeID].DbValue);
                break;
        }
        return toReturn;
    }
}

' [VB.NET]
Private Class OrderConcurrencyFilterFactory
    Implements IConcurrencyPredicateFactory

    Public Function CreatePredicate( _
            predicateTypeToCreate As ConcurrencyPredicateType, containingEntity As Object) _
            As IPredicateExpression Implements IConcurrencyPredicateFactory.CreatePredicate
        Dim toReturn As IPredicateExpression = New PredicateExpression()
        Dim order As OrderEntity = CType(containingEntity, OrderEntity)
        Select Case predicateTypeToCreate
            Case ConcurrencyPredicateType.Delete
                toReturn.Add(OrderFields.EmployeeID = _
                    order.Fields(CInt(OrderFieldIndex.EmployeeID)).DbValue)
            Case ConcurrencyPredicateType.Save
                ' only for updates
                toReturn.Add(OrderFields.EmployeeID = _
                    order.Fields(CInt(OrderFieldIndex.EmployeeID)).DbValue)
        End Select
        Return toReturn
    End Function
End Class

Note : In the VB.NET code above, operator overloading is used. VB.NET on .NET 1.0 or .NET 1.1 doesn't support operator overloading; it was introduced in VB.NET on .NET 2.0. If you're using VB.NET on .NET 1.x, create the predicates using: New FieldCompareValuePredicate(OrderFields.EmployeeID, Nothing, ComparisonOperator.Equal, order.Fields(CInt(OrderFieldIndex.EmployeeID)).DbValue)

During recursive saves, if a save action fails (which can be caused by a predicate produced by an IConcurrencyPredicateFactory, i.e. when no rows are affected by the save action), an ORMConcurrencyException is thrown by the save logic, which will terminate any transaction started by the recursive save. To set an IConcurrencyPredicateFactory object when an entity is created or initialized, please see the section Adding your own code to the generated classes, which discusses various ways to modify the generated code to add your own initialization code, for example code which sets the IConcurrencyPredicateFactory instance for a particular object. You can also use the ConcurrencyPredicateFactoryToUse property of an entity collection to automatically set the ConcurrencyPredicateFactoryToUse property of an entity when it's added to that particular entity collection.
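Wiring such a factory into an entity could look like this. This is a sketch assuming the OrderConcurrencyFilterFactory from the example above and the ConcurrencyPredicateFactoryToUse property mentioned in this section; the Freight change is illustrative.

```csharp
// Sketch: attach the concurrency predicate factory, then save.
OrderEntity order = new OrderEntity(10254);
DataAccessAdapter adapter = new DataAccessAdapter();
adapter.FetchEntity(order);
order.ConcurrencyPredicateFactoryToUse = new OrderConcurrencyFilterFactory();
order.Freight = 12.5M; // illustrative field change
// The UPDATE now also filters on the original EmployeeID value (DbValue).
// If another process changed EmployeeID in the meantime, no rows match
// and an ORMConcurrencyException is thrown.
adapter.SaveEntity(order);
```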

Entities, NULL values and defaults
Some datatypes, like date-related datatypes and strings, are not always mandatory and can be set to an unknown value. In most cases this is NULL: the fields in the table are nullable and, if these fields do not yet have a value, they're set to NULL. Nullable fields often have a 'default' value set; this is a value which is inserted by the database server when a NULL is inserted in such a column. These default values are defined in the table definition itself.

.NET 1.x: no support for nullable value types
In .NET 1.x, NULL values aren't usable inside .NET, since a value type (for example a field of type int/Integer) which can be NULL in the database can't be null/Nothing in .NET 1.x. If you generate code for .NET 1.x or CF.NET 1.0, LLBLGen Pro's generated code converts all NULL values for fields which have a ValueType as .NET type to default values for that particular ValueType. These values are defined in the helper class TypeDefaultValue. You can change these default values in the TypeDefaultValue class to other values; however, keep in mind that these default values are placeholders: you always have to test whether a given field was NULL when the data was fetched from the database. To test whether a given field was NULL when you read it from the database, use TestOriginalFieldValueForNull():
C# VB.NET

Page 180

// [C#]
CustomerEntity customer = new CustomerEntity("CHOPS");
DataAccessAdapter adapter = new DataAccessAdapter();
adapter.FetchEntity(customer);
bool contactTitleIsNull = customer.TestOriginalFieldValueForNull(CustomerFieldIndex.ContactTitle);

' [VB.NET]
Dim customer As New CustomerEntity("CHOPS")
Dim adapter As New DataAccessAdapter()
adapter.FetchEntity(customer)
Dim contactTitleIsNull As Boolean = customer.TestOriginalFieldValueForNull(CustomerFieldIndex.ContactTitle)

The variable 'contactTitleIsNull' now contains true or false, depending on whether the field 'ContactTitle' for the entity "CHOPS" is NULL in the database (true) or not (false). This function returns true even if you've set the field to a new value but haven't saved the entity yet.

.NET 2.0: support for Nullable(Of valueType) types
In .NET 2.0, Microsoft introduced nullable value types, which means that a field of type int/Integer or any other ValueType can be null / Nothing. By default, LLBLGen Pro generates all ValueTyped fields as Nullable(Of valueType) if the target platform is .NET 2.0 or CF.NET 2.0. You can overrule this setting on a per-field basis; the preference (and project property) GenerateNullableFieldsAsNullableTypes controls the default value of this per-field setting (see: Designer - Adding and editing entities). With nullable types for value typed fields, LLBLGen Pro won't convert a null / Nothing value for a field to a default value, but will return null / Nothing from the field's property.

NULL values read from the database
In previous versions of LLBLGen Pro, a NULL value read from the database would result in the default value for the field's type as the in-memory value. This has changed in v2 of LLBLGen Pro: if a field is NULL in the database, the in-memory value will be null / Nothing.
This means that the CurrentValue property of the field object in the entity's Fields collection (entity.Fields[index].CurrentValue) will be null / Nothing in this situation, not a default value.

Note : If you read a value from an entity's field property (e.g. myCustomer.CompanyName) while that field hasn't been set to a value (which is the case in a new entity where the field hasn't been set yet), an ORMInvalidFieldReadException is thrown, if the developer has set the static flag EntityBase(2).MakeInvalidFieldReadsFatal to true (default: false). In v1 you could get away with this and use the default value returned, but because nullable fields lead to different results in v2, such reads would otherwise go unnoticed when you upgrade your project. Use the flag and the exception to track down code errors after migrating your v1 solution to v2.

Setting a field to NULL
Setting a field to NULL is easy. When you create a new entity, you simply do not supply a value for a field you want to set to NULL. The INSERT query will notice that the field isn't changed (because you didn't supply a value for it) and will skip the field. If you have set a default value for that column, the database engine will automatically fill in the default value for that field in the database; this is standard database behaviour. If you want to set a field of an existing entity to NULL, you have to use a special method: SetNewFieldValue(). You set the field's value to null/Nothing and when you then save the entity, the value in the table will be NULL. You have to use this method rather than a property setter, because value types like int/Integer do not accept null/Nothing as a valid value. Using this method will not bypass checks; it's the same method used by the properties to set the values of the fields. Example:
C# VB.NET


// [C#]
OrderEntity order = new OrderEntity(10254);
DataAccessAdapter adapter = new DataAccessAdapter();
adapter.FetchEntity(order);
order.SetNewFieldValue((int)OrderFieldIndex.ShippingDate, null);
adapter.SaveEntity(order);

' [VB.NET]
Dim order As New OrderEntity(10254)
Dim adapter As New DataAccessAdapter()
adapter.FetchEntity(order)
order.SetNewFieldValue(CInt(OrderFieldIndex.ShippingDate), Nothing)
adapter.SaveEntity(order)

On .NET 2.0, with nullable types, this is even easier:
C#, .NET 2.0 VB.NET, .NET 2.0

// [C#], .NET 2.0
OrderEntity order = new OrderEntity(10254);
DataAccessAdapter adapter = new DataAccessAdapter();
adapter.FetchEntity(order);
order.ShippingDate = null;
adapter.SaveEntity(order);

' [VB.NET], .NET 2.0
Dim order As New OrderEntity(10254)
Dim adapter As New DataAccessAdapter()
adapter.FetchEntity(order)
order.ShippingDate = Nothing
adapter.SaveEntity(order)

Usually, you won't need this often: most of the time fields will be set to NULL when the entity is created and will be updated with a value somewhere during the entity's lifecycle. To test whether a field currently represents a NULL value, or rather: whether, if the entity would be saved now, the field would become NULL in the database, you can use a different method: TestCurrentFieldValueForNull():
C# VB.NET

// [C#]
CustomerEntity customer = new CustomerEntity("CHOPS");
customer.SetNewFieldValue((int)CustomerFieldIndex.ContactTitle, null);
customer.TestCurrentFieldValueForNull(CustomerFieldIndex.ContactTitle); // returns true

' [VB.NET]
Dim customer As New CustomerEntity()
customer.SetNewFieldValue(CType(CustomerFieldIndex.ContactTitle, Integer), Nothing)
customer.TestCurrentFieldValueForNull(CustomerFieldIndex.ContactTitle) ' returns true

Note : The usage of NULLs in databases should be discouraged and NULLs should only be used for fields which are optional and often not filled in with a value. In other situations, always use a default value for a NULLable column.

Extending an entity by intercepting activity calls


During the entity's lifecycle and the actions in which the entity participates, various methods of the entity are called, and these might be good candidates to call your own logic as well; for example, when the entity is initialized you might want to do your own initialization too. The entity classes offer a variety of methods for you to override so your code is called in various situations. These methods all start with On and can be found in the LLBLGen Pro reference manual in the class EntityBase2. The entity classes also offer events for some situations, like the Initializing and Initialized events. If you want to perform a given action when one of these methods is called, you can override them in the generated entity classes, preferably using the methods discussed in Adding your own code to the generated classes.
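As a sketch, an override of one of these On* methods, placed inside the generated CustomerEntity class (for example in a user code region, as discussed in Adding your own code to the generated classes); the body shown is just example initialization logic:

```csharp
// Sketch: intercepting entity initialization in the generated
// CustomerEntity class. OnInitialized is one of the virtual On*
// methods defined on EntityBase2.
protected override void OnInitialized()
{
    base.OnInitialized();
    // Your own initialization logic goes here, for example setting
    // a concurrency predicate factory (hypothetical example class):
    // this.ConcurrencyPredicateFactoryToUse = new CustomerConcurrencyFactory();
}
```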

IDataErrorInfo implementation
The .NET interface IDataErrorInfo is now implemented on EntityBase. Two methods have been added to the entities: SetEntityError and SetEntityFieldError, which allow external code to set the error of a field and/or the entity. If append is set to true with SetEntityFieldError, the error message is appended to an existing message for that field using a semi-colon as separator. Entity field validation, which is triggered by the entity's method SetNewFieldValue() (which is called by a property setter), sets the field error if an exception occurs or when the custom field validator fails; the error message is appended to an existing message.
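A short sketch of this mechanism; the parameter order shown for SetEntityFieldError (field name, message, append flag) follows the description above and should be checked against the reference manual:

```csharp
// Sketch: setting field and entity errors via the IDataErrorInfo support.
CustomerEntity customer = new CustomerEntity();

// Set an error message for a single field; 'true' appends to any
// existing message for that field, separated by a semi-colon.
customer.SetEntityFieldError("CompanyName", "Company name is required", true);

// Set an error message for the entity as a whole.
customer.SetEntityError("Customer data is incomplete");

// Databound controls read these back through IDataErrorInfo:
string fieldError = ((IDataErrorInfo)customer)["CompanyName"];
string entityError = ((IDataErrorInfo)customer).Error;
```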
LLBLGen Pro v2.6 documentation. ©2002-2008 Solutions Design


Generated code - Using the entity collection classes, Adapter
Preface
Adapter contains a general purpose EntityCollection class. This is different from SelfServicing, where LLBLGen Pro generates an entity collection class per entity definition in a project. The EntityCollection class is located in the HelperClasses namespace in the database generic project. This class is used to work on more than one entity at the same time and to retrieve more than one entity of the same type from the database. This section describes the different kinds of collection-related functionality bundled in the EntityCollection class and how to utilize that functionality in your code. In .NET 1.x, the EntityCollection class derives from the base class EntityCollectionBase2, which is a class in the ORMSupportClasses.

.NET 2.0 specific: generics
LLBLGen Pro v2.0 supports both .NET 1.x and .NET 2.0. For .NET 2.0, there is both the non-generic EntityCollection class, as known from the generated code for .NET 1.x, and a generic EntityCollection class: EntityCollection(Of TEntity), where TEntity is a class which both implements IEntity2 and derives (indirectly) from EntityBase2, the base class of all adapter entity classes. The non-generic variant is provided for backwards compatibility. The entities themselves use the generic variant, so CustomerEntity.Orders will be of type EntityCollection(Of OrderEntity). In this documentation, the VB.NET way of specifying generics is used, to avoid having to formulate everything twice and because it's more descriptive. So for the people who are unfamiliar with the VB.NET way of defining generics: EntityCollection<B> == EntityCollection(Of B).

TwoClasses scenario
When you generate code using the TwoClasses preset, for .NET 2.0 the entity collections will still be of type EntityCollection(Of RelatedEntityType). This is done to prevent compiler errors stating that EntityCollection(Of B) doesn't derive from EntityCollection(Of A) if B is a subtype of A. These errors can occur because C# and VB.NET don't support a phenomenon called co-variance, which would make EntityCollection(Of B) castable to EntityCollection(Of A) if B is a subtype of A. In .NET 2.0, the EntityCollection(Of T) class derives from the base class EntityCollectionBase2(Of T), which is a class in the ORMSupportClasses. The non-generic EntityCollection class derives from the also non-generic class EntityCollectionNonGeneric, which is used for example for design time databinding and for entity collection usage behind the scenes. EntityCollectionNonGeneric derives from EntityCollectionBase2(Of EntityBase2).
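The co-variance limitation mentioned above can be illustrated with plain .NET generics, independent of the LLBLGen Pro runtime; this sketch uses List<T> rather than EntityCollection(Of T), but the compile error is the same in nature:

```csharp
using System.Collections.Generic;

class A { }
class B : A { }   // B is a subtype of A

class CoVarianceDemo
{
    static void Main()
    {
        List<B> bs = new List<B>();
        // The following line does NOT compile: generic classes are
        // invariant in C#/VB.NET, so List<B> is not a List<A>, even
        // though B derives from A.
        // List<A> asList = bs;   // compile error
    }
}
```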

Note : In general, the material is explained using the .NET 1.x EntityCollection class, as the difference with .NET 2.0 code is solely in the usage of generics, not in additional functionality. Where appropriate, extra .NET 2.0 code is added to illustrate the differences for novice .NET 2.0 programmers.

Entity retrieval into an entity collection object
Entity collection objects can be filled with entities retrieved from the database in several ways. Below we'll walk you through the ones you will use the most.

Using a related entity
The easiest way to retrieve a set of entities in an entity collection is by using a related entity, which in


turn is used as a filter. For example, let's use our customer "CHOPS" again and retrieve all the order entities for that customer. Note that in the following example we do not actually fetch the customer entity from the database; we only use the object and its PK value to construct the filter.
C#, .NET 1.x VB.NET, .NET 1.x C#, .NET 2.0 VB.NET, .NET 2.0

// C#, .NET 1.x
CustomerEntity customer = new CustomerEntity("CHOPS");
DataAccessAdapter adapter = new DataAccessAdapter();
EntityCollection orders = customer.Orders;
adapter.FetchEntityCollection(orders, customer.GetRelationInfoOrders());

' VB.NET, .NET 1.x
Dim customer As New CustomerEntity("CHOPS")
Dim adapter As New DataAccessAdapter()
Dim orders As EntityCollection = customer.Orders
adapter.FetchEntityCollection(orders, customer.GetRelationInfoOrders())

// C#, .NET 2.0
CustomerEntity customer = new CustomerEntity("CHOPS");
DataAccessAdapter adapter = new DataAccessAdapter();
EntityCollection<OrderEntity> orders = customer.Orders;
adapter.FetchEntityCollection(orders, customer.GetRelationInfoOrders());

' VB.NET, .NET 2.0
Dim customer As New CustomerEntity("CHOPS")
Dim adapter As New DataAccessAdapter()
Dim orders As EntityCollection(Of OrderEntity) = customer.Orders
adapter.FetchEntityCollection(orders, customer.GetRelationInfoOrders())

The entity inside 'customer' is used to construct the filter bucket created by GetRelationInfoOrders(), which filters the orders in the persistent storage on the CustomerID field and the value "CHOPS". Adapter does not support lazy loading: all loading of data is done explicitly. This has the advantage that you can transfer an EntityCollection object to another process/tier and be certain no database connection/logic is necessary or required to work with the data inside the collection. It also ensures no extra data is available to the developer/object that you didn't supply. You can filter on more fields, including fields in different entities, by adjusting the RelationPredicateBucket object. The RelationPredicateBucket object is retrieved from the GetRelationInfo*() methods; you can also construct your own if you want. The EntityCollection object to fill, which is passed to the FetchEntityCollection() method, has to contain a valid IEntityFactory2 implementing object. LLBLGen Pro generates such a factory for each entity.
In the example above, customer.Orders is an EntityCollection instance created inside the customer object (by the constructor of CustomerEntity) and already contains the valid factory object for OrderEntity objects. If Order is in an inheritance hierarchy, the fetch is polymorphic. This means that if the customer entity, in this case customer "CHOPS", has references to instances of different derived types of Order, every instance in customer.Orders is of the type it represents, which effectively means that not every instance in Orders is necessarily of the same type. See for more information about polymorphic fetches also Polymorphic fetches.

Using a prefetch path
An easy way to retrieve a set of entities is by using a Prefetch Path, which reads related entities together with the entity or entities to fetch. See for more information about Prefetch Paths and how to use them: Prefetch Paths.
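A minimal sketch of the prefetch path approach, fetching a customer together with its orders in one fetch action (the same PrefetchPath2 API appears in the projection examples later in this section):

```csharp
// Sketch: fetch customer "CHOPS" and its Orders collection in one go
// using a prefetch path (Adapter, .NET 2.0 syntax).
CustomerEntity customer = new CustomerEntity("CHOPS");
PrefetchPath2 path = new PrefetchPath2(EntityType.CustomerEntity);
path.Add(CustomerEntity.PrefetchPathOrders);

using(DataAccessAdapter adapter = new DataAccessAdapter())
{
    adapter.FetchEntity(customer, path);
}
// customer.Orders is now filled; no lazy loading takes place.
```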

Using the collection object
The most flexible way to retrieve a set of entities in an entity collection is by simply using an instance of the


EntityCollection object, creating and filling a new RelationPredicateBucket object (or using a retrieved one as a basis) and calling a DataAccessAdapter object's FetchEntityCollection(). Most of the time you can start with a bucket created by an entity instance's GetRelationInfoFieldMappedOnRelationName method. Let's concentrate on an EntityCollection that should be filled with OrderEntity objects. The Order entity has a rich set of relationships: with Customer, Employee and Shipper (m:1 relations), with OrderDetails (1:n relation) and with Product (m:n relation over OrderDetail). LLBLGen Pro adds GetRelationInfo*() methods to the OrderEntity class for each related entity, to make it easier to create RelationPredicateBucket objects for fetching collections of these related entities. Let's look at the two relation types which result in multiple entities being fetched: 1:n and m:n. 1:n is already addressed in the example above using a customer and its Orders collection; m:n relations are treated similarly.

Using m:n relations
When an entity has one or more m:n (many to many) relationships with other entities, LLBLGen Pro also generates easy to use RelationPredicateBucket creation methods to filter objects over these kinds of relations, using the related entity. Per entity related via an m:n relation there's one GetRelationInfoFieldName method, where FieldName is the name of the field mapped on the m:n relation; it returns a ready to use RelationPredicateBucket object. Let's retrieve all orders which contain the purchase of a product X with productID 10. In the Product entity, the field mapped on the m:n relation Product - Order is named 'Orders', which thus ends up in the method name: GetRelationInfoOrders(). We're not interested in the product entity itself, so that's not fetched. We pass the Orders collection directly without storing it in another reference variable.
C# VB.NET

// [C#]
ProductEntity product = new ProductEntity(10);
DataAccessAdapter adapter = new DataAccessAdapter();
adapter.FetchEntityCollection(product.Orders, product.GetRelationInfoOrders());

' [VB.NET]
Dim product As New ProductEntity(10)
Dim adapter As New DataAccessAdapter()
adapter.FetchEntityCollection(product.Orders, product.GetRelationInfoOrders())

There are multiple ways to retrieve the same data in the framework LLBLGen Pro generates for you. It's up to you which one you use in which situation.

Entity data manipulation using collection classes
Manipulating the entity data of more than one entity at once can be cumbersome when you work with objects that have to be loaded into memory: all entities you want to manipulate have to be instantiated into memory, you have to alter the fields of these objects and then save them individually. LLBLGen Pro offers functionality to work on entity data directly in the persistent storage. This opens up the possibility to do bulk updates or bulk deletes with a single method call, greatly reducing the database traffic and increasing performance. It also improves concurrency safety among threads, because you alter data directly in the shared repository, so other threads see changes immediately. See for an example of direct updating of entities Using entity classes, Modifying an entity, option 3.

Updating entities in a collection in memory
When you have loaded a set of entities in a collection and, for example, have bound this collection to a datagrid, the user has probably altered one or more objects' fields in the collection. You can also alter the fields yourself by looping through the objects inside the collection. When you want to save these changes to the persistent storage, you can use the save methods of the individual objects inside the collection, but you can also use the SaveEntityCollection() method of the DataAccessAdapter object, which walks all objects inside the collection and saves each object that is 'dirty' (which means it's been changed and should be updated in the persistent storage). This is all done in a transaction if no transaction is currently available. (See for more information about transactions the section Transactions.)
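A short sketch of this pattern; the field assignment is just an example change that marks the entities as dirty:

```csharp
// Sketch: bulk-saving the changed entities in a fetched collection.
EntityCollection<CustomerEntity> customers =
    new EntityCollection<CustomerEntity>(new CustomerEntityFactory());
using(DataAccessAdapter adapter = new DataAccessAdapter())
{
    adapter.FetchEntityCollection(customers, null);

    foreach(CustomerEntity customer in customers)
    {
        customer.ContactTitle = "Owner";   // marks the entity as dirty
    }

    // Saves only the dirty entities, inside a transaction if none is active.
    adapter.SaveEntityCollection(customers);
}
```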

Deleting one or more entities from the persistent storage
If you wish to delete one or more entities from the persistent storage, the same problem as with updating a set of entities appears: you first have to load them into memory, call Delete() and they'll be deleted. To


delete a set of entities from the persistent storage, you can use the DeleteEntityCollection() method of the DataAccessAdapter object. This method works with the objects inside the collection and deletes them one by one from the persistent storage, using its own transaction if the delete isn't part of an existing transaction. (See for more information about transactions the section Transactions.)
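A short sketch of deleting a fetched set of entities in one call:

```csharp
// Sketch: delete all orders of customer "CHOPS" from the database.
CustomerEntity customer = new CustomerEntity("CHOPS");
EntityCollection<OrderEntity> orders =
    new EntityCollection<OrderEntity>(new OrderEntityFactory());
using(DataAccessAdapter adapter = new DataAccessAdapter())
{
    adapter.FetchEntityCollection(orders, customer.GetRelationInfoOrders());

    // Deletes each entity in the collection from the persistent storage,
    // inside a transaction if none is currently active.
    adapter.DeleteEntityCollection(orders);
}
```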

Client side sorting
In v2 of LLBLGen Pro, the EntityCollection class no longer implements IBindingList; instead, the EntityCollection class uses the EntityView2 class to bind to grids and other controls, and these views also do the filtering and sorting of the entity collection data. To keep backwards compatibility, the Sort() methods of the EntityCollection class have been kept and work as they did in previous versions of LLBLGen Pro. It's recommended you use an EntityView2 class to sort and filter an entity collection instead of using the Sort() methods directly on an entity collection. See for more information about EntityView2 classes: Generated code - using entity views with entity collections. To sort a fetched collection in memory, without going back to the database, use the entity collection's Sort method (there are various overloads). This method internally uses the ArrayList's QuickSort on the property specified (either by field index or property name). Two overloads also accept an IComparer object, which will then sort the entities based on the implementation of that IComparer object, which you can supply yourself. Below is an example of how to sort a fetched EntityCollection of customers in memory, on company name.
C# VB.NET

// C#
EntityCollection customers = new EntityCollection(new CustomerEntityFactory());
DataAccessAdapter adapter = new DataAccessAdapter();
adapter.FetchEntityCollection(customers, null);
customers.Sort((int)CustomerFieldIndex.CompanyName, ListSortDirection.Descending);

For .NET 2.0, you should use this declaration to define a generic EntityCollection instance:

// C#, .NET 2.0
EntityCollection<CustomerEntity> customers =
    new EntityCollection<CustomerEntity>(new CustomerEntityFactory());

' VB.NET
Dim customers As New EntityCollection(New CustomerEntityFactory())
Dim adapter As New DataAccessAdapter()
adapter.FetchEntityCollection(customers, Nothing)
customers.Sort(CType(CustomerFieldIndex.CompanyName, Integer), ListSortDirection.Descending)

For .NET 2.0, you should use this declaration to define a generic EntityCollection instance:

' VB.NET, .NET 2.0
Dim customers As New EntityCollection(Of CustomerEntity)(New CustomerEntityFactory())

Finding entities inside a fetched entity collection
Although it's recommended to use EntityView2 objects to filter and sort an in-memory EntityCollection object, it can sometimes be helpful to have a quick way to find, in an in-memory entity collection, an entity or group of entities matching a filter. The EntityCollection class offers this facility through the method FindMatches(IPredicate). FindMatches accepts a normal LLBLGen Pro predicate (see for more information about predicates Generated code - getting started with filtering) and returns a list of indexes of all entities matching that predicate. As a PredicateExpression is also a predicate, you can specify a complex filter, including filters on non-field properties, to find the entities you're looking for. On .NET 1.x, FindMatches returns an ArrayList; on .NET 2.0, it returns a List<int> / List(Of Integer). The following example finds all indexes of customer entities from the UK in the fetched entity collection of customers. FindMatches performs an in-memory filter; it won't go to the database.
C# VB.NET


// C#
IPredicate filter = (CustomerFields.Country == "UK");
ArrayList indexes = myCustomers.FindMatches(filter);

' VB.NET
Dim filter As New FieldCompareValuePredicate(CustomerFields.Country, Nothing, ComparisonOperator.Equal, "UK")
Dim indexes As ArrayList = myCustomers.FindMatches(filter)

Note : If you're using VB.NET on .NET 2.0, you can use the simplified syntax using operator overloading: Dim filter As IPredicate = (CustomerFields.Country = "UK")

Note : When using a FieldCompareValuePredicate with FindMatches, be sure to specify the value in the same type as the value of the field. For example, if the field is of type Int64 and you specify the value 1 to compare with, you'll be comparing an Int64 with an Int32, which will fail. Instead, specify the value, 1, as an Int64 as well. FindMatches is the same routine that's used by EntityView2 objects to find the entities which should belong in the view. As the routine is defined virtual / Overridable, you can tweak the way the entities are matched.
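A tiny sketch of the type-match pitfall described in the note, assuming a hypothetical entity whose OrderId field is mapped to an Int64 column:

```csharp
// Wrong: the literal 1 is an Int32, the field values are Int64,
// so the in-memory comparison fails to match.
IPredicate wrong = (OrderFields.OrderId == 1);

// Right: supply the value as an Int64 (the 'L' suffix).
IPredicate right = (OrderFields.OrderId == 1L);
List<int> indexes = myOrders.FindMatches(right);
```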

Hierarchical projections of entity collections
LLBLGen Pro allows you to create projections of the full graph of all the entities inside a given entity collection onto a DataSet or a Dictionary object. A hierarchical projection is a projection where all entities in the entity collection, plus all their related entities and so on, are grouped together per entity type. Say you have the following graph in memory: a set of CustomerEntity instances, each containing a set of OrderEntity instances, and each OrderEntity instance refers to an EmployeeEntity instance. This projection functionality is implemented on the entity collection, in the method CreateHierarchicalProjection. It's implemented on the EntityCollection class and not on the EntityView2 class because it affects related entities as well, while an EntityView2 is a one-level-deep view on an entity collection.

With LLBLGen Pro it's possible to project this graph onto a DataSet, which results in a new DataTable object per entity type with all instances of that entity type (and the data relations set up correctly). You can also project it onto a Dictionary (Hashtable in .NET 1.x) with, per entity type, an entity collection which contains the entities of that type. Projections are defined in instances of the IViewProjectionData interface, which is implemented in the ViewProjectionData class. This class combines per-type projections (as shown below in the examples) which are then used as one projection on the complete graph.

By default, when projecting to a DataSet, only the entity types which have instances in the graph get a DataTable in the resulting DataSet. If you want a DataSet which always has an expected number of DataTable instances (so the tables for entities which aren't in the graph are empty), you can pre-create the DataSet and pass the pre-created DataSet to the projection routine.
LLBLGen Pro's runtime library contains helper routines to produce an empty DataSet with empty DataTables, the correct columns and the proper DataRelation objects set up, based on a specified prefetch path. Please consult the LLBLGen Pro Reference Manual for the GeneralUtils class' ProduceEmptyDataSet and ProduceEmptyDataTable routines.

Examples
The following examples show both projections (to a DataSet and to a Dictionary) of the graph of Customers - Orders - Employees described earlier. The examples first fetch the complete graph of customers, orders and employees and then create a projection of that graph. Usage of custom projections per property and additional filters is also shown. Please refer to the LLBLGen Pro reference manual for details about the generic ViewProjectionData class and its constructors. .NET 1.x users should use ArrayList instances instead of List(Of T) and should use the non-generic ViewProjectionData class.


Projection to DataSet
C# VB.NET

// C#
EntityCollection<CustomerEntity> customers = new EntityCollection<CustomerEntity>();
PrefetchPath2 path = new PrefetchPath2(EntityType.CustomerEntity);
path.Add(CustomerEntity.PrefetchPathOrders).SubPath.Add(OrderEntity.PrefetchPathEmployees);
using(DataAccessAdapter adapter = new DataAccessAdapter())
{
    adapter.FetchEntityCollection(customers, null, path);
}

// setup projections per type.
List<IEntityPropertyProjector> customerProjections = EntityFields2.ConvertToProjectors(
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.CustomerEntity));
// add an additional projector so the destination DataTable will have an additional
// column called 'IsNew' with the value of the IsNew property of the customer entities.
customerProjections.Add(new EntityPropertyProjector(new EntityProperty("IsNew"), "IsNew"));
List<IEntityPropertyProjector> orderProjections = EntityFields2.ConvertToProjectors(
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.OrderEntity));
List<IEntityPropertyProjector> employeeProjections = EntityFields2.ConvertToProjectors(
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.EmployeeEntity));
List<IViewProjectionData> projectionData = new List<IViewProjectionData>();
// create the customer projection information. Specify a filter so only customers
// from Germany are projected.
projectionData.Add(new ViewProjectionData<CustomerEntity>(
    customerProjections, (CustomerFields.Country == "Germany"), true));
projectionData.Add(new ViewProjectionData<OrderEntity>(orderProjections, null, false));
projectionData.Add(new ViewProjectionData<EmployeeEntity>(employeeProjections));
DataSet result = new DataSet("projectionResult");
customers.CreateHierarchicalProjection(projectionData, result);

' VB.NET
Dim customers As New EntityCollection(Of CustomerEntity)()
Dim path As New PrefetchPath2(EntityType.CustomerEntity)
path.Add(CustomerEntity.PrefetchPathOrders).SubPath.Add(OrderEntity.PrefetchPathEmployees)
Using adapter As New DataAccessAdapter()
    adapter.FetchEntityCollection(customers, Nothing, path)
End Using

' setup projections per type.
Dim customerProjections As List(Of IEntityPropertyProjector) = EntityFields2.ConvertToProjectors( _
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.CustomerEntity))
' add an additional projector so the destination DataTable will have an additional
' column called 'IsNew' with the value of the IsNew property of the customer entities.
customerProjections.Add(New EntityPropertyProjector(New EntityProperty("IsNew"), "IsNew"))
Dim orderProjections As List(Of IEntityPropertyProjector) = EntityFields2.ConvertToProjectors( _
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.OrderEntity))
Dim employeeProjections As List(Of IEntityPropertyProjector) = EntityFields2.ConvertToProjectors( _
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.EmployeeEntity))
Dim projectionData As New List(Of IViewProjectionData)()
' create the customer projection information. Specify a filter so only customers
' from Germany are projected.
projectionData.Add(New ViewProjectionData(Of CustomerEntity)( _
    customerProjections, (CustomerFields.Country = "Germany"), True))
projectionData.Add(New ViewProjectionData(Of OrderEntity)(orderProjections, Nothing, False))
projectionData.Add(New ViewProjectionData(Of EmployeeEntity)(employeeProjections))
Dim result As New DataSet("projectionResult")
customers.CreateHierarchicalProjection(projectionData, result)

The same projectors used with the projection to the DataSet are usable with a projection to a Dictionary, which is almost identical to the DataSet example. .NET 1.x users should use a Hashtable object instead of a Dictionary object.

Projection to Dictionary
C# VB.NET

// C#
EntityCollection<CustomerEntity> customers = new EntityCollection<CustomerEntity>();
PrefetchPath2 path = new PrefetchPath2(EntityType.CustomerEntity);
path.Add(CustomerEntity.PrefetchPathOrders).SubPath.Add(OrderEntity.PrefetchPathEmployees);
using(DataAccessAdapter adapter = new DataAccessAdapter())
{
    adapter.FetchEntityCollection(customers, null, path);
}

// setup projections per type.
List<IEntityPropertyProjector> customerProjections = EntityFields2.ConvertToProjectors(
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.CustomerEntity));
// add an additional projector so the destination will have an additional
// value called 'IsNew' with the value of the IsNew property of the customer entities.
customerProjections.Add(new EntityPropertyProjector(new EntityProperty("IsNew"), "IsNew"));
List<IEntityPropertyProjector> orderProjections = EntityFields2.ConvertToProjectors(
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.OrderEntity));
List<IEntityPropertyProjector> employeeProjections = EntityFields2.ConvertToProjectors(
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.EmployeeEntity));
List<IViewProjectionData> projectionData = new List<IViewProjectionData>();
// create the customer projection information. Specify a filter so only customers
// from Germany are projected.
projectionData.Add(new ViewProjectionData<CustomerEntity>(
    customerProjections, (CustomerFields.Country == "Germany"), true));
projectionData.Add(new ViewProjectionData<OrderEntity>(orderProjections, null, false));
projectionData.Add(new ViewProjectionData<EmployeeEntity>(employeeProjections));

Dictionary<Type, IEntityCollection> projectionResults = new Dictionary<Type, IEntityCollection>();
customers.CreateHierarchicalProjection(projectionData, projectionResults);

' VB.NET
Dim customers As New EntityCollection(Of CustomerEntity)()
Dim path As New PrefetchPath2(EntityType.CustomerEntity)
path.Add(CustomerEntity.PrefetchPathOrders).SubPath.Add(OrderEntity.PrefetchPathEmployees)
Using adapter As New DataAccessAdapter()
    adapter.FetchEntityCollection(customers, Nothing, path)
End Using

' setup projections per type.
Dim customerProjections As List(Of IEntityPropertyProjector) = EntityFields2.ConvertToProjectors( _
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.CustomerEntity))
' add an additional projector so the destination will have an additional
' value called 'IsNew' with the value of the IsNew property of the customer entities.
customerProjections.Add(New EntityPropertyProjector(New EntityProperty("IsNew"), "IsNew"))
Dim orderProjections As List(Of IEntityPropertyProjector) = EntityFields2.ConvertToProjectors( _
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.OrderEntity))
Dim employeeProjections As List(Of IEntityPropertyProjector) = EntityFields2.ConvertToProjectors( _
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.EmployeeEntity))
Dim projectionData As New List(Of IViewProjectionData)()
' create the customer projection information. Specify a filter so only customers
' from Germany are projected.
projectionData.Add(New ViewProjectionData(Of CustomerEntity)( _
    customerProjections, (CustomerFields.Country = "Germany"), True))
projectionData.Add(New ViewProjectionData(Of OrderEntity)(orderProjections, Nothing, False))
projectionData.Add(New ViewProjectionData(Of EmployeeEntity)(employeeProjections))

Dim projectionResults As New Dictionary(Of Type, IEntityCollection)()
customers.CreateHierarchicalProjection(projectionData, projectionResults)

Note : If you just want a structure which contains, per entity type, a collection with all the instances of that type in the entity graph (so not really a projection to new copies of the entities), please use the routine ObjectGraphUtils.ProduceCollectionsPerTypeFromGraph. The ObjectGraphUtils class is located in the ORMSupportClasses namespace and contains a variety of routines which work on entity graphs. Please see the LLBLGen Pro reference manual for details on this class and this method.

Tracking entity remove actions
Removing an entity from a collection by calling entitycollection.Remove(toRemove) or entitycollection.RemoveAt(index) is an ambiguous action: do you want to remove the entity from the collection to further process the entities left, or do you want to get rid of the entity completely, both in-memory and in the database? This is why LLBLGen Pro doesn't automatically perform deletes on the database when you remove an entity from a collection: you have to explicitly specify which entities to delete.


Tracking which entities are removed from an entity collection, so they can be deleted from the database later, can be a bit cumbersome if the collection is bound to a grid, for example. To overcome this, LLBLGen Pro has a feature which makes an entity collection track the entities removed from it, using another entity collection. This way, you can keep track of which entities are removed from the entity collection and pass them on to a Unit of Work object for persistence in one transaction, together with the rest of the entities which have changed. The extra collection is necessary because an entity which is removed from the collection isn't in it anymore, so the collection itself can't refer to it. To enable removal tracking in an entity collection, set its RemovedEntitiesTracker property to the collection into which you want to track the entities removed from the collection. This tracker collection can then be added to a UnitOfWork2 object for deletion by using the method unitofwork2.AddCollectionForDelete(collectionWithEntitiesToDelete), or you can delete the entities by calling DataAccessAdapter.DeleteEntityCollection(collectionWithEntitiesToDelete). The following example illustrates this.
C# VB.NET

// C#
// First fetch all customers from Germany with their orders.
EntityCollection<CustomerEntity> customers = new EntityCollection<CustomerEntity>();
PrefetchPath2 path = new PrefetchPath2(EntityType.CustomerEntity);
path.Add(CustomerEntity.PrefetchPathOrders);
using(DataAccessAdapter adapter = new DataAccessAdapter())
{
    adapter.FetchEntityCollection(customers,
        new RelationPredicateBucket(CustomerFields.Country == "Germany"), path);
}

// we now will add a tracker collection to the orders collection of customer 0.
EntityCollection<OrderEntity> tracker = new EntityCollection<OrderEntity>();
customers[0].Orders.RemovedEntitiesTracker = tracker;

// after this, we can do this:
customers[0].Orders.Remove(myOrder);
// and myOrder is removed from the in-memory collection customers[0].Orders
// and it's placed in tracker. We can now delete the entities in tracker
// by using a UnitOfWork2 object or by calling adapter.DeleteEntityCollection(tracker).

' VB.NET
' First fetch all customers from Germany with their orders.
Dim customers As New EntityCollection(Of CustomerEntity)()
Dim path As New PrefetchPath2(EntityType.CustomerEntity)
path.Add(CustomerEntity.PrefetchPathOrders)
Using adapter As New DataAccessAdapter()
    adapter.FetchEntityCollection(customers, _
        New RelationPredicateBucket(CustomerFields.Country = "Germany"), _
        path)
End Using

' we now will add a tracker collection to the orders collection of customer 0.
Dim tracker As New EntityCollection(Of OrderEntity)()
customers(0).Orders.RemovedEntitiesTracker = tracker

' after this, we can do this:
customers(0).Orders.Remove(myOrder)
' and myOrder is removed from the in-memory collection customers(0).Orders
' and it's placed in tracker. We can now delete the entities in tracker
' by using a UnitOfWork2 object or by calling adapter.DeleteEntityCollection(tracker).

Note : Removal tracking isn't used by the Clear() method, because Clear is often used to clean up a collection, not to remove entities from the database. To avoid false positives and the deletion of entities which weren't supposed to be deleted, removal tracking isn't available for the Clear method.
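If you do want every entity in a collection to be deleted from the database, the APIs described above can be combined as in the following sketch. The customer variable is a placeholder for a fetched CustomerEntity with its Orders prefetched; this is an illustration, not documentation-mandated code.

```csharp
// Option 1: delete the collection's entities directly.
using(DataAccessAdapter adapter = new DataAccessAdapter())
{
    adapter.DeleteEntityCollection(customer.Orders);
}

// Option 2: remove the entities one by one so a tracker picks them up,
// instead of calling Clear(), which bypasses removal tracking.
EntityCollection<OrderEntity> tracker = new EntityCollection<OrderEntity>();
customer.Orders.RemovedEntitiesTracker = tracker;
while(customer.Orders.Count > 0)
{
    customer.Orders.Remove(customer.Orders[0]);
}
// tracker now contains the removed orders and can be passed to
// unitOfWork.AddCollectionForDelete(tracker) for deletion in one transaction.
```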
LLBLGen Pro v2.6 documentation. ©2002-2008 Solutions Design


Generated code - Using the EntityView2 class, Adapter
Preface
The EntityView2 class is used to create in-memory views on an EntityCollection object (generic or non-generic) and allows you to filter and sort an in-memory EntityCollection without actually touching the data inside the EntityCollection. An EntityCollection can have multiple EntityView2 objects, similar to the DataTable - DataView combination. This section describes how to use the EntityView2 class in various scenarios. For clarity, the .NET 1.x syntax is used, unless stated otherwise. In .NET 2.0, the EntityView2 class is a generic class, of type EntityView2(Of TEntity), where TEntity is an entity class which derives (indirectly) from EntityBase2 and implements IEntity2, which all generated entity classes do, similar to the generic EntityCollection(Of TEntity).

DataBinding and EntityView2 classes
The EntityCollection class doesn't bind directly to a bound control; it always binds through an EntityView2 object (returned by the property DefaultView, see below). This is a change from the approach taken by LLBLGen Pro 1.0.2005.1 and earlier, where an EntityCollection was always bound directly to a bound control. The EntityView2 approach allows you to create multiple EntityView2 instances on a single EntityCollection and bind them all to different controls, as if they're different sets of data.

Creating an EntityView2 instance
Creating an EntityView2 object is simple:
C#, .NET 1.x VB.NET, .NET 1.x

// C#, .NET 1.x
EntityCollection customers = new EntityCollection(new CustomerEntityFactory());
adapter.FetchEntityCollection(customers, null);   // fetch all customers
EntityView2 customerView = new EntityView2(customers);

' VB.NET, .NET 1.x
Dim customers As New EntityCollection(New CustomerEntityFactory())
adapter.FetchEntityCollection(customers, Nothing) ' fetch all customers
Dim customerView As New EntityView2(customers)

With .NET 2.0, you have to define the EntityView2 with the explicit type of the collection's contained entity type, in this case CustomerEntity. This assumes you've created a generic EntityCollection of type CustomerEntity:
C#, .NET 2.0 VB.NET, .NET 2.0

// C#, .NET 2.0
EntityView2<CustomerEntity> customerView = new EntityView2<CustomerEntity>(customers);

' VB.NET, .NET 2.0
Dim customerView As New EntityView2(Of CustomerEntity)(customers)

For the rest of this section, unless stated otherwise, EntityView2 can be replaced with EntityView2(Of T) in .NET 2.0 code. The code above creates an EntityView2 object on the EntityCollection customers, so it lets you view the data in the EntityCollection 'customers'. EntityView2 objects don't contain any data: all data you'll be able to access through an EntityView2 actually resides in the related EntityCollection.


You can also use the EntityCollection's DefaultView property to create an EntityView2. This works like the DataTable's DefaultView property: every time you read the property, you get the same view object back:
C#, .NET 1.x VB.NET, .NET 1.x C#, .NET 2.0 VB.NET, .NET 2.0

// C#, .NET 1.x
EntityCollection customers = new EntityCollection(new CustomerEntityFactory());
adapter.FetchEntityCollection(customers, null);   // fetch all customers
EntityView2 customerView = customers.DefaultView;

' VB.NET, .NET 1.x
Dim customers As New EntityCollection(New CustomerEntityFactory())
adapter.FetchEntityCollection(customers, Nothing) ' fetch all customers
Dim customerView As EntityView2 = customers.DefaultView

// C#, .NET 2.0
EntityCollection<CustomerEntity> customers = new EntityCollection<CustomerEntity>(new CustomerEntityFactory());
adapter.FetchEntityCollection(customers, null);   // fetch all customers
EntityView2<CustomerEntity> customerView = customers.DefaultView;
// or:
// IEntityView2 customerView = customers.DefaultView;

' VB.NET, .NET 2.0
Dim customers As New EntityCollection(Of CustomerEntity)(New CustomerEntityFactory())
adapter.FetchEntityCollection(customers, Nothing) ' fetch all customers
Dim customerView As EntityView2(Of CustomerEntity) = customers.DefaultView
' or:
' Dim customerView As IEntityView2 = customers.DefaultView

Instead of using the EntityView2 class, you can use the IEntityView2 interface, for example if you don't know the generic type in .NET 2.0 code. The EntityView2 constructor has various overloads which let you specify an initial filter and / or sort expression. You can also set the filter and / or sort expression later on, as described below. Please familiarize yourself with the various methods and properties of the EntityView2 class by checking its entry in the LLBLGen Pro reference manual.

Filtering and sorting an EntityView2
The purpose of an EntityView2 is to give you a 'view', based on a filter and / or a sort expression, on an in-memory EntityCollection. Which data contained in the related EntityCollection is available to you through a particular EntityView2 object depends on the filter set for the EntityView2; the order in which the data is available is controlled by the sort expression set. As the related collection is not touched, you can have as many EntityView2 objects on the same EntityCollection as you want, all exposing different subsets of the data in the EntityCollection, in different orders. Filtering and sorting an EntityView2 is done through the normal LLBLGen Pro predicate and sort clause classes. For more information about predicate classes, see: Getting started with filtering and The predicate system. The following example filters the aforementioned customers collection on all customers from the UK:
C# VB.NET, .NET 1.x VB.NET .NET 2.0

// C#
IPredicate filter = (CustomerFields.Country == "UK");
customerView.Filter = filter;


' VB.NET, .NET 1.x
Dim filter As New FieldCompareValuePredicate(CustomerFields.Country, Nothing, ComparisonOperator.Equal, "UK")
customerView.Filter = filter

' VB.NET, .NET 2.0
Dim filter As IPredicate = (CustomerFields.Country = "UK")
customerView.Filter = filter

You could also have specified this filter with the EntityView2 constructor. As soon as the EntityView2's Filter property is set to a value, the EntityView2 object resets itself and applies the set IPredicate to the related EntityCollection; all matching entity objects then become available through the EntityView2 object. The same applies to the EntityView2's sorter. Let's sort our filtered EntityView2 on CompanyName, ascending. For more information about sort clauses and sort expression objects, please see: Generated code - Sorting.
C# VB.NET, .NET 1.x VB.NET, .NET 2.0

// C#
ISortExpression sorter = new SortExpression(CustomerFields.CompanyName | SortOperator.Ascending);
customerView.Sorter = sorter;

' VB.NET, .NET 1.x
Dim sorter As New SortExpression(New SortClause(CustomerFields.CompanyName, Nothing, SortOperator.Ascending))
customerView.Sorter = sorter

' VB.NET, .NET 2.0
Dim sorter As New SortExpression(CustomerFields.CompanyName Or SortOperator.Ascending)
customerView.Sorter = sorter
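Both Filter and Sorter can also be changed again after construction. A hedged sketch of this, assuming that setting Filter to null (Nothing in VB.NET) removes the filter so the view exposes all entities of the related collection again:

```csharp
// apply a filter and a sorter to the view...
customerView.Filter = (CustomerFields.Country == "UK");
customerView.Sorter = new SortExpression(CustomerFields.CompanyName | SortOperator.Ascending);

// ...and later remove the filter again: the view resets itself and
// exposes all entities of the related collection, still sorted.
customerView.Filter = null;
```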

.NET 2.0+: Use a Predicate(Of T) or Lambda expression (.NET 3.5) for a filter
In .NET 2.0, Microsoft introduced the Predicate<T> delegate type, which is used in several methods of List<T> and Array, for example. In .NET 3.5, lambda expressions were introduced, which are actually Func<T, U> (and variants) implementations. The .NET 3.5 compilers will compile a lambda expression to a Predicate<T> if the method requires a Predicate<T>, as both are, under the surface, simply delegates. EntityView2 has a couple of constructors which accept a Predicate<T>. This allows you to specify a lambda expression in .NET 3.5 to filter the entity collection, or, if you're on .NET 2.0/3.0, a delegate which compiles to Predicate<T>. The example below filters the passed-in collection of CustomerEntity instances on the Country property:
C# VB.NET

// C#
EntityView2<CustomerEntity> customersFromGermany =
    new EntityView2<CustomerEntity>(customers, c => c.Country == "Germany");

' VB.NET
Dim customersFromGermany = _
    New EntityView2(Of CustomerEntity)(customers, Function(c) c.Country = "Germany")

Using the DelegatePredicate<T> class, a developer can also use a Predicate<T> delegate or lambda expression to filter the EntityView2 instance after it's been created:
C# VB.NET


// C#
EntityView2<CustomerEntity> customersFromGermany =
    new EntityView2<CustomerEntity>(customers);
customersFromGermany.Filter = new DelegatePredicate<CustomerEntity>(c => c.Country == "Germany");

' VB.NET
Dim customersFromGermany = _
    New EntityView2(Of CustomerEntity)(customers)
customersFromGermany.Filter = New DelegatePredicate(Of CustomerEntity)(Function(c) c.Country = "Germany")

Multi-clause sorting
The EntityCollection class offers a Sort() method, which is there for backwards compatibility and was used in previous versions by the IBindingList.ApplySort() method. The Sort() method however has one drawback: it can only sort on a single field or property. What if you want to sort on multiple fields? As the EntityView2 allows you to sort the data using a SortExpression, you can specify as many fields as you want. Let's sort the customerView on City ascending and on CompanyName descending:
C# VB.NET, .NET 1.x VB.NET, .NET 2.0

// C#
ISortExpression sorter = new SortExpression(CustomerFields.City | SortOperator.Ascending);
sorter.Add(CustomerFields.CompanyName | SortOperator.Descending);
customerView.Sorter = sorter;

' VB.NET, .NET 1.x
Dim sorter As New SortExpression(New SortClause(CustomerFields.City, Nothing, SortOperator.Ascending))
sorter.Add(New SortClause(CustomerFields.CompanyName, SortOperator.Descending))
customerView.Sorter = sorter

' VB.NET, .NET 2.0
Dim sorter As New SortExpression(CustomerFields.City Or SortOperator.Ascending)
sorter.Add(CustomerFields.CompanyName Or SortOperator.Descending)
customerView.Sorter = sorter

What if you want to sort on a property of an entity which isn't an entity field? After all, Sort() allows you to do that. This is also possible: to specify a property, you use the EntityProperty class instead of an entity field. So if, instead of sorting on CompanyName, you want to sort on the entity property IsDirty, to get all the changed entities first and then the non-changed entities, use this code instead:
C# VB.NET, .NET 1.x VB.NET, .NET 2.0

// C#
ISortExpression sorter = new SortExpression(CustomerFields.City | SortOperator.Ascending);
sorter.Add(new EntityProperty("IsDirty") | SortOperator.Ascending);
customerView.Sorter = sorter;

' VB.NET, .NET 1.x
Dim sorter As New SortExpression(New SortClause(CustomerFields.City, Nothing, SortOperator.Ascending))
sorter.Add(New SortClause(New EntityProperty("IsDirty"), SortOperator.Ascending))
customerView.Sorter = sorter

' VB.NET, .NET 2.0
Dim sorter As New SortExpression(CustomerFields.City Or SortOperator.Ascending)
sorter.Add(New EntityProperty("IsDirty") Or SortOperator.Ascending)
customerView.Sorter = sorter

EntityProperty is usable in any construct which works with an entity field, as long as it concerns in-memory sorting or filtering. Below you'll learn how to filter an EntityView2's data using an entity property.

Filtering using multiple predicates
As a PredicateExpression derives from Predicate, you can also use a PredicateExpression to filter using multiple predicates. There's a limitation, however: not all predicate classes are usable for in-memory filtering. Please consult the section Generated code - The predicate system to see which classes are usable and with which specifics. The filtering is also focused on the entities inside the related EntityCollection, not on entities inside those entities. This means you can't, for example, specify a RelationCollection to filter all customers who have an order from last May. To filter the customers collection on all customers from the UK whose entities have been changed, use the following code:
C# VB.NET, .NET 1.x VB.NET, .NET 2.0

// C#
IPredicateExpression filter = new PredicateExpression(CustomerFields.Country == "UK");
filter.AddWithAnd(new EntityProperty("IsDirty") == true);
customerView.Filter = filter;

' VB.NET, .NET 1.x
Dim filter As New PredicateExpression()
filter.Add(New FieldCompareValuePredicate(CustomerFields.Country, Nothing, ComparisonOperator.Equal, "UK"))
filter.AddWithAnd(New FieldCompareValuePredicate(New EntityProperty("IsDirty"), ComparisonOperator.Equal, True))
customerView.Filter = filter

' VB.NET, .NET 2.0
Dim filter As New PredicateExpression(CustomerFields.Country = "UK")
filter.AddWithAnd(New EntityProperty("IsDirty") = True)
customerView.Filter = filter

View behavior on collection changes
When an entity in the related EntityCollection of the EntityView2 changes, it can happen that the entity no longer matches the filter set for the view, in which case the EntityView2 removes the entity from itself: it's no longer available to you through the EntityView2. This can be confusing, so you can define what the EntityView2 should do when the data inside the related EntityCollection changes. This is done by specifying a PostCollectionChangeAction value with the EntityView2 constructor, or by setting the EntityView2's DataChangeAction property. The following list describes the various values and their effect on the EntityView2's behavior:
- NoAction: do nothing, i.e. don't re-apply the filter nor the sorter.
- ReapplyFilterAndSorter (default): re-applies the filter and the sorter on the collection.
- ReapplySorter: re-applies the sorter on the collection, not the filter.
By default, the EntityView2 will re-apply the filter and sorter. There's no setting for just the filter, as re-applying the filter could alter the set, which could change the order of the data: it's no longer ordered and has to be re-sorted. If the related collection fires a reset event (when it is sorted using its own code, or cleared), the view is also reset and the filter is re-applied, as well as the sorter. If a new entity is added to the collection through code, it is not added to the view in NoAction mode or in ReapplySorter mode, because no filter is re-applied. If it's added through databinding, it actually is added to the view, as it is added through the EntityView2, because an EntityCollection is bound to a bound control via an EntityView2: either an EntityView2 object you created and bound directly, or the EntityView2 object returned by the EntityCollection's DefaultView property.
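For example, to keep an edited entity visible even when it no longer matches the view's filter, the DataChangeAction property described above can be set before editing. A sketch, assuming the Northwind-style customers collection used throughout this section:

```csharp
EntityView2<CustomerEntity> ukCustomersView = new EntityView2<CustomerEntity>(
    customers, (CustomerFields.Country == "UK"), null);

// only re-apply the sorter on changes; the filter isn't re-applied, so an
// entity edited to no longer match the filter stays visible in the view.
ukCustomersView.DataChangeAction = PostCollectionChangeAction.ReapplySorter;

customers[0].Country = "Germany";   // customer 0 remains visible in ukCustomersView
```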

Projecting data inside an EntityView2 on another data-structure
A powerful feature of the EntityView2 class is the ability to project the data in the EntityView2 onto a new data structure, like an EntityCollection, a DataTable or even custom classes (.NET 2.0 only). Projections are a way to produce custom lists of data ('dynamic lists in memory') based on the current data in the EntityView2 and a collection of projection objects. Projection objects are small objects which specify which entity field or entity property should be used in the projection and where to get the value from. Because the raw projection data can be used to re-instantiate new entities, the data can, for example, be used to produce a new EntityCollection with new entities. How the data is projected depends on the projection engine used for the actual projection. For more information about projections, please also see: LLBLGen Pro - Fetching DataReaders and projections. Projections are performed by applying a set of projection objects onto an entity and then passing the resulting data array (an array of type object) on to a projection engine, or projector, for further storage: the projected data is placed in a new instance of a class, for example an entity class, but this can also be a DataRow or a custom class. You can use filters during the projection as well, to limit the set of data you want to project from the EntityView2's data. In .NET 1.x, you have to use ArrayList objects to supply the projector objects; in .NET 2.0, you can use the generic List(Of T) class.

Projection objects: EntityPropertyProjector
A projection object is an instance of the EntityPropertyProjector class. As EntityView2 objects contain entity objects, this is the projection object you should use. LLBLGen Pro supports other projection objects as well, for general purpose projections, as discussed in Fetching DataReaders and projections; however, these aren't usable with EntityView2s. An EntityPropertyProjector instance contains at most two IEntityFieldCore instances (for example normal EntityField2 objects or an EntityProperty object) and a predicate, for example a FieldCompareValuePredicate or a PredicateExpression. The first IEntityFieldCore instance is mandatory; this is the default value. If a predicate is specified (optional) and it resolves to true, the default value (thus the first IEntityFieldCore) is used; otherwise the second IEntityFieldCore instance is used. This way you can select, per entity, from two fields, for example SomeEntity.Name1 and SomeEntity.Name2: based on the predicate specified, either the value of field Name1 of the entity (if the predicate resolves to true) or the value of Name2. The EntityPropertyProjector also contains a Name property, which is used to produce the name of the result field. The projection routine used is free to use this name for column purposes (projection onto a DataTable), but can also use it for entity field setting (projection onto an entity). If developers want to execute a piece of code on the value prior to storing it in the projected slot, they can derive their own class from EntityPropertyProjector and override ValuePostProcess(). This routine is normally empty and receives the value and the entity being processed. It all might sound a little complex, but it's fairly straightforward, as will be shown in a couple of examples below. Projecting an EntityView2's data is done by the CreateProjection routine of an EntityView2 object.
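A sketch of such a derived projector follows. The exact ValuePostProcess signature isn't shown in this section, so the override below is an assumption based on the description above (it receives the value and the entity being processed); please check the EntityPropertyProjector entry in the reference manual for the real signature before using this.

```csharp
// Hypothetical projector which upper-cases projected string values.
public class UpperCasePropertyProjector : EntityPropertyProjector
{
    public UpperCasePropertyProjector(IEntityFieldCore defaultValueSource, string name)
        : base(defaultValueSource, name)
    { }

    // Assumed signature: receives the produced value and the entity being
    // processed, and returns the value to store in the projected slot.
    protected override object ValuePostProcess(object value, IEntityCore entity)
    {
        string asString = value as string;
        return asString == null ? value : asString.ToUpperInvariant();
    }
}
```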
LLBLGen Pro comes with three different projection engines: one for projecting data onto a DataTable (the class DataProjectorToDataTable), one for projecting data onto an EntityCollection (the class DataProjectorToEntityCollection) and, on .NET 2.0, one for projecting data onto a list of custom classes (the class DataProjectorToCustomClass). You can write your own projection engine as well: simply implement the interface IEntityDataProjector to be able to use the engine in projections of EntityView2 data. If you also want to use the same engine in projections of resultsets, as discussed in Fetching DataReaders and projections, you should also implement the almost identical interface IGeneralDataProjector. Because the interface implementations can re-use the actual projection engine logic, it's easy to re-use projection code for both projection mechanisms. Only the data which is available to you through the EntityView2 can be projected. You can't project nested data inside entities, nor entity data not in the EntityView2. In that case, create a new EntityView2 on the same EntityCollection using a different filter and project that EntityView2 object instead.

Creating EntityPropertyProjector instances for all entity fields
Sometimes you want to project all fields of a given entity, and it can be cumbersome to create a lot of EntityPropertyProjector objects if your entity has a lot of fields. Instead, you can use the shortcut method on EntityFields2:

EntityFields2.ConvertToProjectors(
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.entitynameEntity))

where entityname is the name of the entity. This method returns a List of IEntityPropertyProjector objects, one for each entity field of the specified entity type.
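For example, to project every CustomerEntity field to a DataTable without writing a projector per field, a sketch, assuming a fetched customers collection as in the earlier examples:

```csharp
// one projector per CustomerEntity field, created in one call.
List<IEntityPropertyProjector> allFieldProjectors = EntityFields2.ConvertToProjectors(
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.CustomerEntity));

DataTable result = new DataTable();
// project all entities currently visible in the collection's default view.
customers.DefaultView.CreateProjection(allFieldProjectors, result);
```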

Examples of EntityView2 projections
Projection to DataTable
C# VB.NET

// C#
EntityCollection customers = new EntityCollection(new CustomerEntityFactory());
adapter.FetchEntityCollection(customers, null);   // fetch all customers

// create a view of all customers in Germany
EntityView2 customersInGermanyView = new EntityView2(
    customers, (CustomerFields.Country == "Germany"), null);

// create a projection of these customers of just the city and the customerid.
// for that, define 2 property projectors, one for each field to project
ArrayList propertyProjectors = new ArrayList();
propertyProjectors.Add(new EntityPropertyProjector(CustomerFields.City, "City"));
propertyProjectors.Add(new EntityPropertyProjector(CustomerFields.CustomerId, "CustomerID"));
DataTable projectionResults = new DataTable();
// create the actual projection.
customersInGermanyView.CreateProjection(propertyProjectors, projectionResults);

' VB.NET
Dim customers As New EntityCollection(New CustomerEntityFactory())
adapter.FetchEntityCollection(customers, Nothing) ' fetch all customers

' create a view of all customers in Germany
Dim customersInGermanyView As New EntityView2(customers, _
    New FieldCompareValuePredicate(CustomerFields.Country, Nothing, ComparisonOperator.Equal, "Germany"), Nothing)

' create a projection of these customers of just the city and the customerid.
' for that, define 2 property projectors, one for each field to project
Dim propertyProjectors As New ArrayList()
propertyProjectors.Add(New EntityPropertyProjector(CustomerFields.City, "City"))
propertyProjectors.Add(New EntityPropertyProjector(CustomerFields.CustomerId, "CustomerID"))
Dim projectionResults As New DataTable()
' create the actual projection.
customersInGermanyView.CreateProjection(propertyProjectors, projectionResults)

After this code, the DataTable projectionResults contains two columns, City and CustomerID, and it contains the data for the fields City and CustomerId of each entity in the EntityView2, which are all entities with Country equal to "Germany".
Projection to EntityCollection
The following example performs a projection onto an EntityCollection. It uses the entities from Concepts - Entity inheritance and relational models, where Clerk is another subtype of Employee.
C# VB.NET

// C#
// fetch all managers
EntityCollection managers = new EntityCollection(new ManagerEntityFactory());
adapter.FetchEntityCollection(managers, null);

// now project them onto new clerk entities, by just projecting the employee fields
ArrayList propertyProjectors = new ArrayList();
propertyProjectors.Add(new EntityPropertyProjector(EmployeeFields.Id, "Id"));
propertyProjectors.Add(new EntityPropertyProjector(EmployeeFields.Name, "Name"));
propertyProjectors.Add(new EntityPropertyProjector(EmployeeFields.StartDate, "StartDate"));
propertyProjectors.Add(new EntityPropertyProjector(EmployeeFields.WorksForDepartmentId, "WorksForDepartmentId"));
EntityCollection clerks = new EntityCollection(new ClerkEntityFactory());
EntityView2 managersView = managers.DefaultView;
// project data to transform all managers into clerks. ;)
managersView.CreateProjection(propertyProjectors, clerks);

' VB.NET
' fetch all managers
Dim managers As New EntityCollection(New ManagerEntityFactory())
adapter.FetchEntityCollection(managers, Nothing)

' now project them onto new clerk entities, by just projecting the employee fields
Dim propertyProjectors As New ArrayList()
propertyProjectors.Add(New EntityPropertyProjector(EmployeeFields.Id, "Id"))
propertyProjectors.Add(New EntityPropertyProjector(EmployeeFields.Name, "Name"))
propertyProjectors.Add(New EntityPropertyProjector(EmployeeFields.StartDate, "StartDate"))
propertyProjectors.Add(New EntityPropertyProjector(EmployeeFields.WorksForDepartmentId, "WorksForDepartmentId"))
Dim clerks As New EntityCollection(New ClerkEntityFactory())
Dim managersView As EntityView2 = managers.DefaultView
' project data to transform all managers into clerks. ;)
managersView.CreateProjection(propertyProjectors, clerks)

After this code, the collection clerks contains ClerkEntity instances with only the EmployeeEntity fields (inherited by ClerkEntity from its base type EmployeeEntity, which is also the base type of ManagerEntity) filled with data.
.NET 2.0: projection to custom classes
This code requires .NET 2.0 or higher, due to the generics used in the DataProjectorToCustomClass projector engine. With some reflection, it is possible to create such a class for .NET 1.x, though the class itself has to be set up a little differently. The code below also shows how to use the projectors in .NET 2.0. It uses the class TestCustomer, which is given below the projection example code (in C#). The projection also shows how to project a property of an entity which isn't an entity field, namely IsDirty, using the EntityProperty class.
C#, .NET 2.0 VB.NET, .NET 2.0

// C#, .NET 2.0
EntityCollection<CustomerEntity> customers =
    new EntityCollection<CustomerEntity>(new CustomerEntityFactory());
adapter.FetchEntityCollection(customers, null);
EntityView2<CustomerEntity> allCustomersView = customers.DefaultView;
List<TestCustomer> customCustomers = new List<TestCustomer>();
DataProjectorToCustomClass<TestCustomer> customClassProjector =
    new DataProjectorToCustomClass<TestCustomer>(customCustomers);
List<IEntityPropertyProjector> propertyProjectors = new List<IEntityPropertyProjector>();
propertyProjectors.Add(new EntityPropertyProjector(CustomerFields.CustomerId, "CustomerID"));
propertyProjectors.Add(new EntityPropertyProjector(CustomerFields.City, "City"));
propertyProjectors.Add(new EntityPropertyProjector(CustomerFields.CompanyName, "CompanyName"));
propertyProjectors.Add(new EntityPropertyProjector(CustomerFields.Country, "Country"));
propertyProjectors.Add(new EntityPropertyProjector(new EntityProperty("IsDirty"), "IsDirty"));
// create the projection
allCustomersView.CreateProjection(propertyProjectors, customClassProjector);

' VB.NET, .NET 2.0
Dim customers As New EntityCollection(Of CustomerEntity)(New CustomerEntityFactory())
adapter.FetchEntityCollection(customers, Nothing)
Dim allCustomersView As EntityView2(Of CustomerEntity) = customers.DefaultView
Dim customCustomers As New List(Of TestCustomer)()
Dim customClassProjector As New DataProjectorToCustomClass(Of TestCustomer)(customCustomers)
Dim propertyProjectors As New List(Of IEntityPropertyProjector)()
propertyProjectors.Add(New EntityPropertyProjector(CustomerFields.CustomerId, "CustomerID"))
propertyProjectors.Add(New EntityPropertyProjector(CustomerFields.City, "City"))
propertyProjectors.Add(New EntityPropertyProjector(CustomerFields.CompanyName, "CompanyName"))
propertyProjectors.Add(New EntityPropertyProjector(CustomerFields.Country, "Country"))
propertyProjectors.Add(New EntityPropertyProjector(New EntityProperty("IsDirty"), "IsDirty"))
' create the projection
allCustomersView.CreateProjection(propertyProjectors, customClassProjector)

The custom class, TestCustomer:

/// <summary>
/// Test class for projection of fetched entities onto custom classes using a custom projector.
/// </summary>
public class TestCustomer
{
    #region Class Member Declarations
    private string _customerID, _companyName, _city, _country;
    private bool _isDirty;
    #endregion

    public TestCustomer()
    {
        _city = string.Empty;
        _companyName = string.Empty;
        _customerID = string.Empty;
        _country = string.Empty;
        _isDirty = false;
    }

    #region Class Property Declarations
    public string CustomerID
    {
        get { return _customerID; }
        set { _customerID = value; }
    }

    public string City
    {
        get { return _city; }
        set { _city = value; }
    }

    public string CompanyName
    {
        get { return _companyName; }
        set { _companyName = value; }
    }

    public string Country
    {
        get { return _country; }
        set { _country = value; }
    }

    public bool IsDirty
    {
        get { return _isDirty; }
        set { _isDirty = value; }
    }
    #endregion
}

Distinct projections.
It can be helpful to have distinct projections: no duplicate data in the projection results. Distinct projections are supported, as the following example shows. Creating a distinct projection is simply a matter of passing false / False for allowDuplicates in the CreateProjection method. The example below shows a couple of projection-related aspects: it filters the entity view's data using a Like predicate prior to projecting the data, so you can limit the data inside an EntityView2 used for the projection, and it shows how a predicate can be used to choose between two values in an entity to determine the end result of projecting that entity. The example uses Northwind, like most examples in this documentation. The code contains Assert statements, which are left in to show you how many elements to expect at that point in the routine.

EntityCollection<CustomerEntity> customers = new EntityCollection<CustomerEntity>( new CustomerEntityFactory() );
adapter.FetchEntityCollection( customers, null );
EntityView2<CustomerEntity> customersInGermanyView = new EntityView2<CustomerEntity>(
        customers, (CustomerFields.Country == "Germany"), null );
Assert.AreEqual( 11, customersInGermanyView.Count );

// create straightforward projection of these customers of just the city and the customerid.
List<IEntityPropertyProjector> propertyProjectors = new List<IEntityPropertyProjector>();
propertyProjectors.Add( new EntityPropertyProjector( CustomerFields.City, "City" ) );
propertyProjectors.Add( new EntityPropertyProjector( CustomerFields.CustomerId, "CustomerID" ) );
DataTable projection = new DataTable();
customersInGermanyView.CreateProjection( propertyProjectors, projection );
Assert.AreEqual( 11, projection.Rows.Count );

// do distinct filtering during the following projection. It projects ContactTitle and IsNew.
propertyProjectors = new List<IEntityPropertyProjector>();
propertyProjectors.Add( new EntityPropertyProjector( CustomerFields.ContactTitle, "Contact title" ) );
// any entity property can be used for projection source.
propertyProjectors.Add( new EntityPropertyProjector( new EntityProperty( "IsNew" ), "Is new" ) );
projection = new DataTable();
customersInGermanyView.CreateProjection( propertyProjectors, projection, false );
Assert.AreEqual( 7, projection.Rows.Count );

// do distinct filtering and filter the set to project. Re-use previous property projectors.
// 3 rows match the specified filter, distinct filtering makes it 2.
projection = new DataTable();
customersInGermanyView.CreateProjection( propertyProjectors, projection, false,
        (CustomerFields.ContactTitle % "Marketing%") );
Assert.AreEqual( 2, projection.Rows.Count );

// use alternative projection source based on filter.
projection = new DataTable();
propertyProjectors = new List<IEntityPropertyProjector>();
// bogus data, but performs what we need: for all contacttitles not matching the filter, CustomerId is used.
propertyProjectors.Add( new EntityPropertyProjector( CustomerFields.ContactTitle, "Contact title",
        (CustomerFields.ContactTitle % "Marketing%"), CustomerFields.CustomerId ) );
propertyProjectors.Add( new EntityPropertyProjector( CustomerFields.CustomerId, "CustomerID" ) );
// create a new projection, with distinct filtering, which gives different results now,
// because ContactTitle is now sometimes equal to CustomerId.
customersInGermanyView.CreateProjection( propertyProjectors, projection, false );
Assert.AreEqual( 11, projection.Rows.Count );
foreach( DataRow row in projection.Rows )
{
    if( !row["Contact title"].ToString().StartsWith( "Marketing" ) )
    {
        Assert.AreEqual( row["Contact title"], row["CustomerID"] );
    }
}

Aggregates aren't supported in in-memory projections, though Expressions are. All expressions are fully evaluated, where '+' operators on strings result in string concatenations. The new DbFunctionCall object to call database functions inside an Expression object is ignored during expression evaluation.
LLBLGen Pro v2.6 documentation. ©2002-2008 Solutions Design


Generated code - Using the context, Adapter
Preface
SelfServicing and Adapter both support so-called uniquing contexts. These contexts, implemented in the Context class available in the ORM support classes, represent a semantic context in your program. Within such a semantic context, the representing Context object assures that an entity loaded by the framework is loaded into just one object. This is required, for example, when refetching trees of objects using prefetch paths, or in situations where more than one object with the same data is problematic. This section discusses the Context object in more detail. It is not necessary to use a Context object in your application; in stateless environments like ASP.NET, for example, it's not of much use. However, it can sometimes be required to have just one entity class instance with a given entity's data in a given semantic context, for example in an edit form in a Windows Forms application.

The Context class
Context objects are created by the developer, live as long as the developer wants, and keep objects in their cache as long as the Context objects live. A developer can create multiple Context objects to create different semantic contexts in which entity objects are unique. This can help when, for example, two screens list the same entity and editing the entity on one screen shouldn't automatically alter the instance on the other screen as well (because the user can click Cancel, for example).

The Context class works with instances of entity classes. This means that when you pass an entity instance using remoting or webservices to a server which then returns it after processing, you won't receive back the same instance of the entity you sent to the server. This is because of the serialization and deserialization which take place during remoting.

The Context class identifies entities using the PK field values. This means that new entities aren't directly added to the Context's internal object cache, as for example identity columns won't have a value for the PK field until the entity is saved and the transaction has been committed. When a new entity is saved and the transaction (if any) is committed, the entity is added to the Context's cache. A non-new entity object which is added to a Context object is directly added to the Context's internal object cache, provided the entity object hasn't been added to another Context already.

Entity objects can be added to the Context at any time, as can entity collection objects. When an entity object is added to a Context, its internally referenced related entity objects and entity collection objects are added to the same Context object as well.
When an entity collection is added to a Context object, all its entity objects are added to the Context, and every entity object added to the collection after that point is added to the same Context object as well, which makes working with a Context object mostly transparent.

Context objects don't act as a cache used to prevent database activity: every query is still executed on the database. If an entity is already loaded in the used Context object, the entity data is not added to a new entity object; instead, the entity object already loaded is updated. If the already loaded object is dirty, the data isn't updated, the fetched entity data is simply skipped and the already loaded entity object is returned as-is. This is done in the Get routine of the Context object. The Context has a flag to disallow this particular action: SetExistingEntityFieldsInGet. See the LLBLGen Pro reference manual for details on this flag.

You can of course use the Context as an object cache for single object fetches, though keep in mind that a Context object simply acts as a unique-instance supplier; it doesn't fetch objects from the database. So if you request an entity instance from a Context object using Get and the Context object can't find it in its cache, you have to test whether the returned object is indeed a fetched entity object or a new entity object. Context objects don't fetch data themselves, to keep the fetch logic in just a few known places and to avoid fragmentation of this logic, which could blur the overview of where the code actually performs fetch activity.

Adding an entity object which is already present in the Context is a no-op, as is adding an entity object which is already part of another Context object. After an entity is added to a Context object, and a 1:1/m:1


reference is set to an entity class instance, the related entity is not added to the context automatically; this has to be done manually by the developer. However, when an entity is added to a collection which is added to a context, the entity is added to that context as well. The Context object an entity is added to is returned by the entity's ActiveContext property. When an entity is deleted, the status of the entity is set to Deleted by the delete routines. The Context.Get method will remove an entity from the store if the entity is deleted and not participating in a transaction. Until then, the entity is kept in the Context's object cache.
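As a quick sketch of these rules (this snippet is not from the original manual; the entity and property names are assumptions following the Northwind examples used in this section):

```csharp
Context myContext = new Context();
OrderEntity order = new OrderEntity(10254);
myContext.Add(order);

// setting a 1:1/m:1 reference does NOT add the related entity automatically:
CustomerEntity customer = new CustomerEntity("CHOPS");
order.Customer = customer;      // customer.ActiveContext is still null here
myContext.Add(customer);        // so add it manually

// adding an entity to a collection which is in the context DOES add it:
OrderDetailsEntity detail = new OrderDetailsEntity();
order.OrderDetails.Add(detail); // detail.ActiveContext is now myContext
```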

Using the Context class
The Context class should be seen as a convenience-providing class for uniquing within a semantic context. It shouldn't be confused with a UnitOfWork + object-fetch object: it leaves that functionality to other objects and methods.

Retrieving instances from a Context
A Context object supplies a Get method which offers different ways to retrieve the already loaded instance for a given entity. As a Context object uses the value(s) of the PK field(s), you can use those to retrieve the unique instance. The different ways are illustrated below; each will try to retrieve the instance which already contains the entity data for the customer with CustomerID "CHOPS".
C# VB.NET

// C#
// using a factory
CustomerEntity c = (CustomerEntity)myContext.Get(new CustomerEntityFactory(), "CHOPS");

// using a fetched entity
CustomerEntity c = new CustomerEntity("CHOPS");
adapter.FetchEntity(c);
c = (CustomerEntity)myContext.Get(c);

' VB.NET
' using a factory
Dim c As CustomerEntity = CType(myContext.Get(New CustomerEntityFactory(), "CHOPS"), CustomerEntity)

' using a fetched entity
Dim c As New CustomerEntity("CHOPS")
adapter.FetchEntity(c)
c = CType(myContext.Get(c), CustomerEntity)

Single entity fetches
Creating an entity object using a normal constructor creates a new instance of the entity class, and fetching data into that object loads the data into that new instance. To load the entity's data into a new entity class instance if the Context used doesn't have an instance with that data present, and to get the already loaded instance if the Context does have one, use the construct mentioned above:
C# VB.NET

// C#
CustomerEntity c = new CustomerEntity("CHOPS");
adapter.FetchEntity(c);
c = (CustomerEntity)myContext.Get(c);

' VB.NET
Dim c As New CustomerEntity("CHOPS")
adapter.FetchEntity(c)


c = CType(myContext.Get(c), CustomerEntity)

This will fetch customer "CHOPS" from the database, but the context will check whether the entity is already loaded in this context. If so, it will return that instance, not the newly fetched instance. If the entity object isn't known to the Context, it is added to the Context and the Context returns the instance passed to the Get() method call. Entities can also be added manually first and then fetched:
C# VB.NET

// C#
CustomerEntity c = new CustomerEntity("CHOPS");
myContext.Add(c);
adapter.FetchEntity(c);

' VB.NET
Dim c As New CustomerEntity("CHOPS")
myContext.Add(c)
adapter.FetchEntity(c)

Or, using a unique constraint:
C# VB.NET

// C#
CustomerEntity c = new CustomerEntity();
c.CompanyName = "Foo Inc.";
myContext.Add(c);
adapter.FetchEntityUsingUniqueConstraint(c, c.ConstructFilterForUCCompanyName());

' VB.NET
Dim c As New CustomerEntity()
c.CompanyName = "Foo Inc."
myContext.Add(c)
adapter.FetchEntityUsingUniqueConstraint(c, c.ConstructFilterForUCCompanyName())

Be aware that the actual instance 'c' is only unique if the particular entity hasn't been loaded yet; this is due to the c = new CustomerEntity() line. Fetching using unique constraints is therefore a bit problematic in this case. To avoid this, you can do:
C# VB.NET

// C#
CustomerEntity c = new CustomerEntity();
c.CompanyName = "Foo Inc.";
myContext.Add(c);
adapter.FetchEntityUsingUniqueConstraint(c, c.ConstructFilterForUCCompanyName());
c = (CustomerEntity)myContext.Get(c); // get unique version. No db activity.

' VB.NET
Dim c As New CustomerEntity()
c.CompanyName = "Foo Inc."
myContext.Add(c)
adapter.FetchEntityUsingUniqueConstraint(c, c.ConstructFilterForUCCompanyName())
c = CType(myContext.Get(c), CustomerEntity) ' get unique version. No db activity.

Prefetch Path fetches
Fetching an entity or a set of entities using a prefetch path can fully utilize a Context via one of the


overloads of FetchEntityCollection, FetchEntity, FetchNewEntity etc. which accept a Context object. If you're fetching a graph and you want, for every entity already loaded in a particular Context, the instance in which that entity is already loaded, you can pass in the Context to which these entity objects were added. The fetch logic will then build the object graph using the instances from the passed-in Context; otherwise it will read the entity data into newly created entity objects. Below is an example which uses a Context object to fetch additional nodes of an object graph after parts of the graph were fetched earlier. The example shows this in one routine, but these two activities could of course take place in separate routines.
C# VB.NET

// C#
DataAccessAdapter adapter = new DataAccessAdapter();
try
{
    Context myContext = new Context();
    CustomerEntity customer = new CustomerEntity("BLONP");
    IPrefetchPath2 prefetchPath = new PrefetchPath2((int)EntityType.CustomerEntity);
    prefetchPath.Add(CustomerEntity.PrefetchPathOrders);
    adapter.FetchEntity(customer, prefetchPath, myContext);
    // ...
    // customer and its orders are now loaded. Say we want to add later on the order details
    // of each order of this customer to this graph. We can do that with the following code.

    // redefine the prefetch path, as if we're somewhere else in the application
    prefetchPath = new PrefetchPath2((int)EntityType.CustomerEntity);
    prefetchPath.Add(CustomerEntity.PrefetchPathOrders).SubPath.Add(OrderEntity.PrefetchPathOrderDetails);
    // fetch the customer again. As it is already added to a Context (myContext), the fetch logic
    // will use that Context object. This fetch action will fetch all data again, but into the same
    // objects and will for each Order entity loaded load the Order Detail entities as well.
    adapter.FetchEntity(customer, prefetchPath);
}
finally
{
    adapter.Dispose();
}

' VB.NET
Dim adapter As New DataAccessAdapter()
Try
    Dim myContext As New Context()
    Dim customer As New CustomerEntity("BLONP")
    Dim prefetchPath As New PrefetchPath2(CInt(EntityType.CustomerEntity))
    prefetchPath.Add(CustomerEntity.PrefetchPathOrders)
    adapter.FetchEntity(customer, prefetchPath, myContext)
    ' ...
    ' customer and its orders are now loaded. Say we want to add later on the order details
    ' of each order of this customer to this graph. We can do that with the following code.


    ' redefine the prefetch path, as if we're somewhere else in the application
    prefetchPath = New PrefetchPath2(CInt(EntityType.CustomerEntity))
    prefetchPath.Add(CustomerEntity.PrefetchPathOrders).SubPath.Add(OrderEntity.PrefetchPathOrderDetails)
    ' fetch the customer again. As it is already added to a Context (myContext), the fetch logic
    ' will use that Context object. This fetch action will fetch all data again, but into the same
    ' objects and will for each Order entity loaded load the Order Detail entities as well.
    adapter.FetchEntity(customer, prefetchPath)
Finally
    adapter.Dispose()
End Try

Note: When fetching an entity collection, you have to add the collection to fetch to the Context object first and then call the FetchEntityCollection method.
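The note above as a minimal sketch (not from the original manual; it re-uses the Northwind entities from the other examples in this section):

```csharp
Context myContext = new Context();
EntityCollection<CustomerEntity> customers =
    new EntityCollection<CustomerEntity>(new CustomerEntityFactory());
// add the collection to the context first...
myContext.Add(customers);
using(DataAccessAdapter adapter = new DataAccessAdapter())
{
    // ...then fetch: entities already known to myContext are re-used.
    adapter.FetchEntityCollection(customers, null);
}
```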

Entity Save calls
When an entity is saved, the DataAccessAdapter class will signal the Context the entity is in (if any) that the entity in question is saved and that, if the entity is new, it should be moved to the normal object cache inside the Context, provided the entity is not in a transaction. If the entity is in a transaction, this activity is performed after the transaction is committed, in the entity's base-class transaction-commit routine. This is done to prevent the Context object from moving a new entity to the object cache even though the transaction was rolled back. If a recursive save saves an entity which is not yet in the active context, the entity is added to the active context.

Multi-entity activity
Actions on entity collections work inside the active context if the collection is first added to a context. All persistence logic will re-use objects from the Context object if the entity collection used has been added to a context. SaveEntityCollection() will first add any entity it saves to the context the collection is in, if the entity isn't already in the context.
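A sketch of this behavior (not from the original manual; the Phone field is an assumption based on the Northwind Customer entity):

```csharp
Context myContext = new Context();
EntityCollection<CustomerEntity> customers =
    new EntityCollection<CustomerEntity>(new CustomerEntityFactory());
myContext.Add(customers);
using(DataAccessAdapter adapter = new DataAccessAdapter())
{
    adapter.FetchEntityCollection(customers, null);
    customers[0].Phone = "030-1234567";
    // SaveEntityCollection() first adds the entities it saves to the
    // collection's context, if they're not already in it.
    adapter.SaveEntityCollection(customers);
}
```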

Remarks
PK values shouldn't be changed: the Context relies on non-changing PK values. A Context shouldn't be used as a cache, nor should it be kept alive for a long time; keep it alive just long enough for the semantic context in which unique objects are needed. Entities which are deleted in the database directly are not picked up by the Context; this is something the developer has to take into account when deleting entities directly. As the Context class doesn't use any locking mechanism, the Context object isn't thread-safe and should be used in single-threaded semantic contexts only.


Generated code - Using the typed view classes, Adapter
Preface
LLBLGen Pro supports read-only lists based on a database view (Typed View) or a selection of entity fields from one or more entities with a relation (not m:n) (Typed List). Although both elements are different, they both will be generated as a typed DataTable, which is a variant of the typed DataSet concept included in Visual Studio.NET. A typed DataTable is a class which derives from the .NET DataTable and defines properties and a row class to access the individual fields in a typed fashion. In this section the typed view classes are briefly discussed and their usage is illustrated using examples.

Instantiating and using a Typed View
As described in the concepts, a Typed View definition is a 1:1 mapping of a database view onto an element in an LLBLGen Pro project. A typed view contains, for each database view column, a field with the same name or the name you gave it. When the typed view element is generated into code, it ends up as a class derived (indirectly) from DataTable, a typed DataTable to be exact, which is usable as a read-only list. You can limit the number of rows using standard LLBLGen Pro filtering techniques. These techniques use Predicate Expressions and relations included in a RelationPredicateBucket object (see: Getting started with filtering). Because the typed DataTable is derived from the .NET DataTable class, it can be used to create DataView objects. With DataView objects you can specify additional filtering, sorting, calculations etc. As an illustration, we'll include the view 'Invoices' from the Northwind database, and use the Typed View 'Invoices' for the code examples.

Instantiating and filling a Typed View
To create an instance of the 'Invoices' typed view and fill it with all the data in the view, the following code is sufficient:
C# VB.NET

// [C#]
InvoicesTypedView invoices = new InvoicesTypedView();
DataAccessAdapter adapter = new DataAccessAdapter();
adapter.FetchTypedView(invoices.GetFieldsInfo(), invoices);

' [VB.NET]
Dim invoices As New InvoicesTypedView()
Dim adapter As New DataAccessAdapter()
adapter.FetchTypedView(invoices.GetFieldsInfo(), invoices)

The rows will be added as they are received from the database provider; no sorting or filtering is applied. Furthermore, all rows in the view are read, which is probably not what you want. Let's filter the rows, so the fetch will only return those rows with an OrderID larger than 11000, without filtering out duplicate rows:
C# VB.NET

// [C#]
InvoicesTypedView invoices = new InvoicesTypedView();
DataAccessAdapter adapter = new DataAccessAdapter();


RelationPredicateBucket bucket = new RelationPredicateBucket();
bucket.PredicateExpression.Add(InvoicesFields.OrderId > 11000);
adapter.FetchTypedView(invoices.GetFieldsInfo(), invoices, bucket, true);

' [VB.NET]
Dim invoices As New InvoicesTypedView()
Dim adapter As New DataAccessAdapter()
Dim bucket As New RelationPredicateBucket()
bucket.PredicateExpression.Add(New FieldCompareValuePredicate(InvoicesFields.OrderId, Nothing, ComparisonOperator.GreaterThan, 11000))
adapter.FetchTypedView(invoices.GetFieldsInfo(), invoices, bucket, True)

The overloaded FetchTypedView() versions also accept one or more of these other parameters: maxNumberOfItemsToReturn, which is used to limit the number of rows returned (when this parameter is set to 0, it is ignored and all rows are returned), and sortClauses, which is a collection of SortClause objects and is used to sort the rows before they're added to the Typed View object (if this parameter is set to null, no sorting is performed). Specifying a filter will narrow down the rows to the ones matching the filter; the filter can be as complex as you want. See Getting started with filtering and Sorting for filtering information and how to set up sort clauses.

Reading a value from a filled Typed View
After we've filled the typed view object, we can use the values read. As said, Typed View and Typed List objects in the form of a typed DataTable are read-only, and therefore extremely handy for filling lists on a screen or website, but not usable for data manipulation. For modifying data you should use the entity classes / collection classes. Below, we read a given value from row 0: the value for the sales person. We assume the invoices object has been filled with data using any of the previously mentioned ways.
C# VB.NET

// [C#]
string salesPerson = invoices[0].Salesperson;

' [VB.NET]
Dim salesPerson As String = invoices(0).Salesperson

That's it. The '0' points to the row, and the row is 'typed', thus has named properties for the individual columns in the object; you can just read the value using a property.

Null values
Because the TypedView (and TypedList) classes are derived from DataTable, the underlying DataTable cells still contain System.DBNull.Value if the field in the database is NULL. You can test for NULL by using the generated IsFieldNameNull() methods (where FieldName is the name of the field). Reading a field whose value is System.DBNull.Value in code, as in the example above, will result in the default value for the type of the field, as defined in the TypeDefaultValue class. Databinding will use a DataView, as that's built into the DataTable, which will then return the System.DBNull.Value values and not the TypeDefaultValue values.
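A sketch of such a NULL test; the method name IsSalespersonNull() is an assumption, derived from the generated Is&lt;FieldName&gt;Null() pattern described above:

```csharp
string salesPerson = "(unknown)";
if(!invoices[0].IsSalespersonNull())
{
    // the cell isn't System.DBNull.Value, so the property returns the real value
    salesPerson = invoices[0].Salesperson;
}
```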

Limiting and sorting a typed view
When sorting the data in a typed view we're not actually sorting the data in the object, but sorting the data before it is read into the object. This is achieved by using a sort operator in the actual SQL query. To do this, you pass a set of sort clauses to the FetchTypedView() method. Below is an example sorting the invoices typed view on the field 'ExtendedPrice' in descending order. Sort clauses are easily created using the SortClause factory in the generated code. We pass the same filter as mentioned earlier and we assume the variable adapter still holds a reference to a DataAccessAdapter instance.
C# VB.NET


// [C#]
invoices.Clear(); // clear all current data
ISortExpression sorter = new SortExpression(InvoicesFields.ExtendedPrice | SortOperator.Descending);
adapter.FetchTypedView(invoices.GetFieldsInfo(), invoices, bucket, 0, sorter, true);

' [VB.NET]
invoices.Clear() ' clear all current data
Dim sorter As ISortExpression = New SortExpression( _
    New SortClause(InvoicesFields.ExtendedPrice, Nothing, SortOperator.Descending))
adapter.FetchTypedView(invoices.GetFieldsInfo(), invoices, bucket, 0, sorter, True)

The rows are now sorted on the ExtendedPrice field, in descending order.
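Because the filled typed view is a regular DataTable, additional in-memory filtering and sorting, without issuing a new query, can be done through a standard ADO.NET DataView (plain .NET code; the column name follows the ExtendedPrice field used above):

```csharp
DataView expensiveFirst = new DataView(invoices);
expensiveFirst.Sort = "ExtendedPrice DESC";        // in-memory sort, no database roundtrip
expensiveFirst.RowFilter = "ExtendedPrice > 500";  // in-memory filter
```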


Generated code - Using the typed list classes, Adapter
Preface
LLBLGen Pro supports read-only lists based on a database view (Typed View) or a selection of entity fields from one or more entities with a relation (not m:n) (Typed List). Although both elements are different, they both will be generated as a typed DataTable, which is a variant of the typed DataSet concept included in Visual Studio.NET. A typed DataTable is a class which derives from the .NET DataTable and defines properties and a row class to access the individual fields in a typed fashion. In this section the typed list classes are briefly discussed and their usage is illustrated using examples. It's recommended that you read the Using the typed view classes, Adapter section as well, as it discusses constructs which are shared with the Typed List.

Instantiating and using a Typed List
Using a Typed List is similar to using a Typed View; both are generated as typed DataTables. There is a difference, however, in how you formulate filters and sort clauses for a typed list. The reason for this is that a typed list is constructed from existing entity fields, while a typed view uses its own field definitions. If you want to construct a filter for a typed list, you have to specify fields of the entities which form the base of the typed list; you can also filter on fields which are not included in the typed list itself, i.e. not in the column set of the typed list. A Typed List contains code to help you construct the RelationPredicateBucket objects for filtering. Use the GetRelationInfo() method of the Typed List object to get an initial bucket with the essential information for fetching the data of the Typed List. Because RelationPredicateBucket objects contain an already existing PredicateExpression object, you can directly start adding filter predicates to the bucket. A Typed List is filled similarly to Typed View objects, with one difference: you should use DataAccessAdapter.FetchTypedList() instead of DataAccessAdapter.FetchTypedView(). As an example, we construct a typed list from the entities Customer and Order and include the following fields in the resultset: Order.OrderID, Order.OrderDate, Order.ShippedDate, Customer.CustomerID, Customer.CompanyName. (Do this by checking these fields in the field list in the typed list editor.) You can now filter on any field in Order or Customer or both; likewise, you can sort on any field in Order or Customer or both. Let's filter this typed list on all orders from customers from 'Brazil', and sort the list on the field Order.Freight, ascending. The typed list is called OrderCustomer.
C# VB.NET

// [C#]
OrderCustomerTypedList orderCustomer = new OrderCustomerTypedList();
DataAccessAdapter adapter = new DataAccessAdapter();
IRelationPredicateBucket bucket = orderCustomer.GetRelationInfo();
bucket.PredicateExpression.Add(CustomerFields.Country == "Brazil");
ISortExpression sorter = new SortExpression(OrderFields.Freight | SortOperator.Ascending);
// Set allowDuplicates to true, because we sort on a field that is not in our resultset and we use SqlServer.
adapter.FetchTypedList(orderCustomer.GetFieldsInfo(), orderCustomer, bucket, 0, sorter, true);

' [VB.NET]
Dim orderCustomer As New OrderCustomerTypedList()
Dim adapter As New DataAccessAdapter()


Dim bucket As IRelationPredicateBucket = orderCustomer.GetRelationInfo()
bucket.PredicateExpression.Add(New FieldCompareValuePredicate(CustomerFields.Country, Nothing, ComparisonOperator.Equal, "Brazil"))
Dim sorter As ISortExpression = New SortExpression(New SortClause(OrderFields.Freight, Nothing, SortOperator.Ascending))
' Set allowDuplicates to true, because we sort on a field that is not in our resultset and we use SqlServer.
adapter.FetchTypedList(orderCustomer.GetFieldsInfo(), orderCustomer, bucket, 0, sorter, True)

The Typed List object is now filled with the rows for the 5 columns we've specified in the Typed List editor, sorted on Order.Freight ascending and filtered on Customer.Country equals "Brazil".

Note: If you're using one of the FetchTypedList overloads of DataAccessAdapter which accepts IEntityFields2 and an IRelationPredicateBucket, you have to pass the object you get from typedlist.GetFieldsInfo() as the fieldCollectionToFetch value and the object you get from typedlist.GetRelationInfo() as the filterBucket parameter. Otherwise the relations of the typed list aren't detected. If you don't want to filter the typed list, use the overloads of FetchTypedList which accept a typedListToFill parameter. See the reference manual for details.

Note: TypedLists still offer the functionality of weak relations through the property ObeyWeakRelations (see this section in Filtering and Sorting for a description of weak relations). It's however recommended to use the JoinHint specifications for the relations in the TypedList editor in the LLBLGen Pro Designer instead.


Generated code - Using dynamic lists, Adapter
Preface
LLBLGen Pro offers you the ability to create lists in code, without needing the designer. This can come in handy if you just want to pull a small list of data from the database without having to re-generate the code. The following paragraphs briefly discuss how to create a dynamic list in code. Dynamic lists use the same building blocks as the Typed View and Typed List classes and can be used with normal filters and other constructs like group by.

LLBLGen Pro v2.0's different ResultsetFields classes
In LLBLGen Pro v1.0.2005.1 and earlier, the generated ResultsetFields class in the HelperClasses namespace contained a lot of overloads of the DefineField() method. By default, LLBLGen Pro still generates this code, but it also offers a new, more efficient way to define fields in a ResultsetFields object: instead of using the FieldIndex enums, it uses the generated entity Fields classes (e.g. CustomerFields). If you're creating a new project with LLBLGen Pro, you can avoid having all the DefineField overloads in the ResultsetFields class by moving the BackwardsCompatibility templatebindings to the bottom of the list in the Generator configuration dialog. Please see: Designer - Generating code. By default the BackwardsCompatibility template bindings take precedence over the more compact newer templates, so that code from existing projects which are upgraded to LLBLGen Pro v2.0 keeps compiling. The examples in this section use the new approach with DefineField(field object, ...).

Creating dynamic lists
Typed lists are great; however, sometimes you need a small list of data, built from one or more entities and used in a read-only way, and you don't really need the typed functionality that comes with a typed list. After all, a typed list requires you to go into the designer, create the list and re-generate the code. Dynamic lists are based on entity fields, using similar code to what TypedList classes use internally. These lists are loaded into DataTable objects. The following example shows you how to create such a dynamic list. The example uses aggregates and a GroupByCollection to read a custom resultset into a DataTable, fully built with entity fields. Because you can use the same logic Typed Lists use internally, fetching the data simply means calling a method in a class which is already available.
C# VB.NET

// C#
DataAccessAdapter adapter = new DataAccessAdapter();
ResultsetFields fields = new ResultsetFields(3);
fields.DefineField(EmployeeFields.FirstName, 0, "FirstNameManager", "Manager");
fields.DefineField(EmployeeFields.LastName, 1, "LastNameManager", "Manager");
fields.DefineField(EmployeeFields.LastName, 2, "AmountEmployees", "Employee", AggregateFunction.Count);
IRelationPredicateBucket bucket = new RelationPredicateBucket();
bucket.Relations.Add(EmployeeEntity.Relations.EmployeeEntityUsingEmployeeId, "Employee", "Manager", JoinHint.None);
IGroupByCollection groupByClause = new GroupByCollection();
groupByClause.Add(fields[0]);
groupByClause.Add(fields[1]);
DataTable dynamicList = new DataTable();
adapter.FetchTypedList(fields, dynamicList, bucket, 0, null, true, groupByClause);


' VB.NET
Dim adapter As New DataAccessAdapter()
Dim fields As New ResultsetFields(3)
fields.DefineField(EmployeeFields.FirstName, 0, "FirstNameManager", "Manager")
fields.DefineField(EmployeeFields.LastName, 1, "LastNameManager", "Manager")
fields.DefineField(EmployeeFields.LastName, 2, "AmountEmployees", "Employee", AggregateFunction.Count)
Dim bucket As IRelationPredicateBucket = New RelationPredicateBucket()
bucket.Relations.Add(EmployeeEntity.Relations.EmployeeEntityUsingEmployeeId, "Employee", "Manager", JoinHint.None)
Dim groupByClause As IGroupByCollection = New GroupByCollection()
groupByClause.Add(fields(0))
groupByClause.Add(fields(1))
Dim dynamicList As New DataTable()
adapter.FetchTypedList(fields, dynamicList, bucket, 0, Nothing, True, groupByClause)

This list retrieves all managers and the number of employees they manage. Let's walk through the example to make it more understandable. It first creates the list of fields which will form the list. ResultsetFields is a class defined in the HelperClasses namespace of your generated code (database generic project); it derives from EntityFields2, the container for EntityField2 objects which is also present in every entity as the Fields property. The three lines following the declaration of the fields variable define the three fields in detail. Each specifies an entity field, to signal which field we want at that position of the resultset, then the index of the field in the ResultsetFields object, then the alias for the field in the resultset and, optionally, the alias for the entity this field belongs to (required here, as we join Employee twice). The third field is actually the same as the second, Employee.LastName, but has an aggregate function applied to it.
LastName is not a numeric field, but the type of the field is not important when an aggregate function is applied: the field defines a column in the dynamic list and is used as a parameter for the aggregate function; the aggregate function itself, or rather the value it produces, is the actual value of the column, and its type is determined at runtime. As a DataColumn object can contain any value, this works as planned. Because we join Employee twice, we have to define a relation collection and add the relation required for the join. The entities in the relation are aliased as "Employee" and "Manager" so the generated code knows from which table each field should be retrieved. As we're going to group, we define the group by collection and add the fields which participate in the group by, in the order in which we want to group. We don't add the third field, as it is an aggregated field which consumes the grouped data. After that, the objects are set up to retrieve the data. We use the adapter's FetchTypedList method, as that method is capable of fetching a set of data into a DataTable object. We could have specified a filter as well, additional relations for the filter, and even paging parameters. This way of creating lists of data is very flexible and can easily be extended with expressions for complex resultsets, for example for usage in reports.
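Conceptually, the code above results in a query along the following lines. This is a sketch, assuming Northwind's ReportsTo self-relation; the actual SQL emitted by the Dynamic Query Engine differs in aliasing, schema prefixes and parameterization:

```sql
SELECT [Manager].[FirstName]        AS [FirstNameManager],
       [Manager].[LastName]         AS [LastNameManager],
       COUNT([Employee].[LastName]) AS [AmountEmployees]
FROM [Employees] [Manager]
     INNER JOIN [Employees] [Employee]
         ON [Manager].[EmployeeID] = [Employee].[ReportsTo]
GROUP BY [Manager].[FirstName], [Manager].[LastName]
```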
LLBLGen Pro v2.6 documentation. ©2002-2008 Solutions Design


Generated code - Using GROUP BY and HAVING clauses, Adapter
Preface
This section discusses the usage of GROUP BY and HAVING clauses with typed lists, typed views and dynamic lists. Usage is the same for all three.

Using GroupByCollection and Having Clauses
Data in lists, be it a TypedView, a TypedList or a dynamic list, is often grouped into smaller sets, which are then processed by aggregate functions. LLBLGen Pro allows you to specify a GroupByCollection when calling DataAccessAdapter.FetchTypedList() or DataAccessAdapter.FetchTypedView(). The GroupByCollection can contain a custom filter which will be used as a HAVING clause in the query to generate. The filter is a normal PredicateExpression and can contain any predicate you would otherwise use in a normal filter, with one restriction: fields referred to in a HAVING clause have to be part of the GroupByCollection or have to have an aggregate function applied to them. This is an SQL restriction. For more information about aggregate functions, please see Field expressions and aggregates. To make effective use of a group by action, fields in the resultset should be in the GroupByCollection or have an aggregate function applied to them. You can apply aggregate functions in the TypedList editor, and also in your code. The example below applies aggregates and expressions to the fields in a TypedList called OrderTotals. The TypedList contains just two fields: OrderDetails.OrderId (aliased as "OrderId", at index 0) and OrderDetails.UnitPrice (aliased as "TotalPrice", as it will contain the total price of the order, at index 1). By applying expressions, aggregates and a group by action, together with a having clause, the typed list will contain all orders with a total price higher than $1,000. The data could also have been read using a dynamic list, as is illustrated later in this section. By assigning the expression and aggregate function to the field object which is used in both the fields list for fetching and the having clause, the expression and aggregate are used in both the select list and the HAVING clause.
C# VB.NET, .NET 1.x VB.NET, .NET 2.0

// C#
DataAccessAdapter adapter = new DataAccessAdapter();
OrderTotalsTypedList orderTotals = new OrderTotalsTypedList();
IGroupByCollection groupByClause = new GroupByCollection();
// grab the fields collection.
IEntityFields2 fields = orderTotals.GetFieldsInfo();
groupByClause.Add(fields[0]);
// construct Having filter.
// Expression for total price: ((unitprice * quantity) - ((unitprice * quantity) * discount))
groupByClause.HavingClause = new PredicateExpression(
    fields[1]
        .SetExpression(
            (OrderDetailsFields.UnitPrice * OrderDetailsFields.Quantity) -
            ((OrderDetailsFields.UnitPrice * OrderDetailsFields.Quantity) * OrderDetailsFields.Discount))
        .SetAggregateFunction(AggregateFunction.Sum) > 1000.0f);
adapter.FetchTypedList(fields, orderTotals, orderTotals.GetRelationInfo(), 0, null, true, groupByClause);

' VB.NET .NET 1.x
Dim adapter As New DataAccessAdapter()
Dim orderTotals As New OrderTotalsTypedList()
' grab the fields collection.
Dim fields As IEntityFields2 = orderTotals.GetFieldsInfo()
' Expression for total price: ((unitprice * quantity) - ((unitprice * quantity) * discount))
Dim productPriceExpression As IExpression = New Expression( _
    OrderDetailsFields.UnitPrice, ExOp.Mul, OrderDetailsFields.Quantity)
Dim discountExpression As IExpression = New Expression( _
    productPriceExpression, ExOp.Mul, OrderDetailsFields.Discount)
Dim totalPriceExpression As IExpression = New Expression( _
    productPriceExpression, ExOp.Sub, discountExpression)
fields(1).ExpressionToApply = totalPriceExpression
fields(1).AggregateFunctionToApply = AggregateFunction.Sum
Dim groupByClause As IGroupByCollection = New GroupByCollection()
groupByClause.Add(fields(0))
Dim havingFilter As IPredicateExpression = New PredicateExpression()
havingFilter.Add(New FieldCompareValuePredicate(fields(1), Nothing, ComparisonOperator.GreaterThan, 1000.0F))
groupByClause.HavingClause = havingFilter
adapter.FetchTypedList(fields, orderTotals, orderTotals.GetRelationInfo(), 0, Nothing, True, groupByClause)

' VB.NET .NET 2.0
Dim adapter As New DataAccessAdapter()
Dim orderTotals As New OrderTotalsTypedList()
Dim groupByClause As IGroupByCollection = New GroupByCollection()
' grab the fields collection.
Dim fields As IEntityFields2 = orderTotals.GetFieldsInfo()
groupByClause.Add(fields(0))
' construct Having filter.
' Expression for total price: ((unitprice * quantity) - ((unitprice * quantity) * discount))
groupByClause.HavingClause = New PredicateExpression( _
    fields(1) _
        .SetExpression( _
            (OrderDetailsFields.UnitPrice * OrderDetailsFields.Quantity) - _
            ((OrderDetailsFields.UnitPrice * OrderDetailsFields.Quantity) * OrderDetailsFields.Discount)) _
        .SetAggregateFunction(AggregateFunction.Sum) _
        > 1000.0)
adapter.FetchTypedList(fields, orderTotals, orderTotals.GetRelationInfo(), 0, Nothing, True, groupByClause)

The main part is the creation of the Expression object which calculates the proper total for an order. Expression objects are re-usable, so although the code may look a little verbose, the expressions can be re-used elsewhere in your application. The GroupByCollection's HavingClause is set to a FieldCompareValuePredicate which compares the TotalPrice value with the value 1000.0. As the field used in the predicate is the same field object as the one in the resultset, the expression and aggregate function are also applied to the field in the HAVING clause, where the total price has to be re-calculated.
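Conceptually, the code above produces a query along the following lines. This is a sketch; the actual SQL generated by the Dynamic Query Engine differs in aliasing, schema prefixes and parameter naming:

```sql
SELECT [Order Details].[OrderID] AS [OrderId],
       SUM(([UnitPrice] * [Quantity]) - (([UnitPrice] * [Quantity]) * [Discount])) AS [TotalPrice]
FROM [Order Details]
GROUP BY [Order Details].[OrderID]
HAVING SUM(([UnitPrice] * [Quantity]) - (([UnitPrice] * [Quantity]) * [Discount])) > @totalPrice
```

Note how the expression with its aggregate appears twice: once in the select list and once in the HAVING clause, exactly because the same field object carries the expression in both places.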


Generated code - Calling a stored procedure, Adapter
Preface
LLBLGen Pro supports existing stored procedures by offering the ability to define calls to them. There are two types of stored procedures: procedures which do not return a resultset, called Action Stored Procedures, and procedures which return one or more resultsets, called Retrieval Stored Procedures. This section illustrates how call definitions to the stored procedures in your project are generated in code and how you can use them in your code. Each stored procedure call method has one overload with an extra ref/ByRef parameter, returnValue, which returns the return value of the stored procedure, if that's supported by the target database (SqlServer, for example, supports this). Classes with stored procedure calls are stored in the database specific VS.NET project. This is by design.
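The returnValue overload mentioned above can be sketched as follows. This is an illustration, using the CustOrderDetail procedure discussed later in this section; the exact position of the returnValue parameter in the generated overload may differ per project:

```csharp
// Sketch: fetching a procedure's resultset together with its RETURN value.
// 'ref' requires the variable to be definitely assigned before the call.
int returnValue = 0;
DataTable resultSet = RetrievalProcedures.CustOrderDetail(10254, ref returnValue);
// returnValue now holds the stored procedure's RETURN value (on SqlServer).
```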

Retrieval Stored Procedure Calls
When you add a call definition for a retrieval stored procedure, a static/shared method that will call that stored procedure will be added to a class called RetrievalProcedures. When the stored procedure called returns a single resultset (which is the most common approach with SqlServer, with Oracle this would be a stored procedure/function with a single REF CURSOR output parameter), the return value of the generated method will be a DataTable. When the stored procedure returns more than one resultset, the return value of the generated method will be a DataSet, containing each resultset in a separate DataTable. For example, if we include a call definition into our LLBLGen Pro project to the procedure in Northwind called 'CustOrderDetail', taking one parameter, an OrderID, a static method called CustOrderDetail is created, returning a DataTable (because the procedure returns a single resultset) and accepting a single parameter, orderID, which is of type int/Integer because the parameter itself is of type integer. To utilize this method in your own code, you can call it in the following way. Let's pass in the orderID 10254 as parameter value:
C# VB.NET

// [C#]
DataTable resultSet = RetrievalProcedures.CustOrderDetail(10254);

' [VB.NET]
Dim resultSet As DataTable = RetrievalProcedures.CustOrderDetail(10254)

Nothing more is needed. This one line of code passes 10254 as the value for the parameter of the stored procedure CustOrderDetail and returns the result in a DataTable object. When a stored procedure has more than one parameter, all parameters are specified as input parameters for the method calling the stored procedure. Because the stored procedure call methods are located in the database specific project, they create a new DataAccessAdapter object if no such object is supplied, which is the case in our example above. If you want to use an existing DataAccessAdapter, for example because you want the stored procedure to run inside an existing transaction, you can specify that existing adapter as an extra parameter in the method call. Output parameters are also supported. When a stored procedure has an output parameter, a parameter representing it is added to the method heading, defined as 'ref' (C#) or 'ByRef' (VB.NET). Illustrated below is the call to an imaginary stored procedure which returns a DataTable, takes 4 input parameters and returns a value in an output parameter:
C# VB.NET


// [C#]
int outputValue = 0;
DataTable resultSet = RetrievalProcedures.MyStoredProcedure(1, 2, 3, 4, ref outputValue);

' [VB.NET]
Dim outputValue As Integer
Dim resultSet As DataTable = RetrievalProcedures.MyStoredProcedure(1, 2, 3, 4, outputValue)

Action Stored Procedure Calls
If you have added a call to a procedure which does not return a resultset to your project, the static/shared method is added to the class ActionStoredProcedures. Instead of returning a DataTable or DataSet, a method in this class returns an int/Integer, which represents the return value of ExecuteNonQuery(): the number of rows affected, if the database has row counting enabled (and the stored procedure doesn't switch it off). Otherwise the action stored procedure methods work the same as the retrieval stored procedure methods mentioned above: input parameters are defined as normal parameters for the method and output parameters are defined as ref/ByRef parameters.
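A call to an action procedure looks the same as a retrieval call, only the return type differs. A minimal sketch, assuming a hypothetical action procedure ClearTestData taking one parameter:

```csharp
// Sketch with a hypothetical action procedure 'ClearTestData'.
// The returned int is ExecuteNonQuery()'s rows-affected value.
int rowsAffected = ActionStoredProcedures.ClearTestData("Germany");
```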

Wrap call in IRetrievalQuery object
In v2.0, LLBLGen Pro also offers the ability to obtain the call to a retrieval stored procedure as an IRetrievalQuery object. An IRetrievalQuery object is the query object generated by a Dynamic Query Engine (DQE) and executed by the low level fetch logic of LLBLGen Pro's O/R mapper core. The IRetrievalQuery object allows you to fetch a query as a datareader or to project the results of the stored procedure call onto a data-structure of your choice, for example an entity collection. You retrieve an IRetrievalQuery object which wraps the call to a given stored procedure by calling the following generated method (one such method is generated for each retrieval stored procedure):
C# VB.NET

// C#
IRetrievalQuery procCall = RetrievalProcedures.GetStoredProcedureCallNameCallAsQuery(parameters);

' VB.NET
Dim procCall As IRetrievalQuery = RetrievalProcedures.GetStoredProcedureCallNameCallAsQuery(parameters)

You can then pass the IRetrievalQuery object to the methods for fetching a datareader or fetching a projection. For more information, see LLBLGen Pro - Fetching DataReaders and projections.


Generated code - Fetching DataReaders and projections, Adapter
Preface
LLBLGen Pro v2 introduces two new ways of fetching a resultset: as an open IDataReader object and as a projection. This section discusses both and illustrates them with a couple of examples, either using a stored procedure or a query built using entity fields. Fetching a resultset as an open IDataReader is considered an advanced feature and should be used with care: an open IDataReader object represents an open cursor to data on a connected RDBMS over an open connection. This means that passing the IDataReader around in your application is not recommended. Instead, use the IDataReader in the same routine in which you called the fetch logic that created it, and immediately afterwards make sure the IDataReader is closed and disposed. This way you're sure you free up resources early. To understand projections better, it's recommended to first read the section about fetching an open IDataReader. Another section describing projections, but related to an entity view object, is Generated code - using the EntityView2 class.

Fetching a resultset as an open IDataReader
To fetch a resultset as an open IDataReader, you call one of the overloads of FetchDataReader, a method of DataAccessAdapter. There are two ways to use the FetchDataReader method: by supplying a ready to use IRetrievalQuery, or by specifying a fields list and the various other elements the Dynamic Query Engine (DQE) requires to create a new query. The first option, the IRetrievalQuery option, can be used to fetch a retrieval stored procedure as an open IDataReader, by using the RetrievalProcedures.GetStoredProcedureNameCallAsQuery() method of the particular stored procedure call. This is a generated method, one for every retrieval stored procedure call known in the LLBLGen Pro project. FetchDataReader also accepts a parameter of type CommandBehavior. This parameter is very important, as it controls what happens when the datareader is closed. It's only required to specify a behavior different from CloseConnection if the fetch is inside a transaction and the connection has to stay open after the datareader has been closed. It's recommended to familiarize yourself with the various overloads of the FetchDataReader method using the LLBLGen Pro reference manual. The method is defined on DataAccessAdapterBase, the base class of every generated DataAccessAdapter class. It's possible to construct your own IRetrievalQuery object with your own SQL, by instantiating a new RetrievalQuery object. However, in general it's recommended to use the FetchDataReader overloads which accept a fields list and other elements, and let LLBLGen Pro generate the query for you.

Fetching a Retrieval Stored Procedure as an IDataReader
An example of calling a procedure and receiving a datareader from it is listed below. It calls the Northwind stored procedure CustOrdersOrders, which returns a single resultset with 4 fields. The example simply prints the output to the console. The VB.NET example uses a Try / Finally block, as VB.NET for .NET 1.x doesn't support the Using statement. Users of VB.NET for .NET 2.0 can replace the Try/Finally block with the Using statement, as illustrated in the C# example.
C# VB.NET

// C#
using( DataAccessAdapter adapter = new DataAccessAdapter() )
{
    IDataReader reader = adapter.FetchDataReader(
        RetrievalProcedures.GetCustOrdersOrdersCallAsQuery( "CHOPS" ),
        CommandBehavior.CloseConnection );
    while( reader.Read() )
    {
        Console.WriteLine( "Row: {0} | {1} | {2} | {3} |", reader.GetValue( 0 ),
            reader.GetValue( 1 ), reader.GetValue( 2 ), reader.GetValue( 3 ) );
    }
    // close reader, will also close connection
    reader.Close();
}

' VB.NET
Dim adapter As New DataAccessAdapter()
Try
    Dim reader As IDataReader = adapter.FetchDataReader( _
        RetrievalProcedures.GetCustOrdersOrdersCallAsQuery( "CHOPS" ), _
        CommandBehavior.CloseConnection )
    While reader.Read()
        Console.WriteLine( "Row: {0} | {1} | {2} | {3} |", reader.GetValue( 0 ), _
            reader.GetValue( 1 ), reader.GetValue( 2 ), reader.GetValue( 3 ) )
    End While
    ' close reader, will also close connection
    reader.Close()
Finally
    ' Connection is closed when the reader was closed
    adapter.Dispose()
End Try

Fetching a Dynamic List as an IDataReader
An example of a dynamic list used to receive a datareader is listed below. The example simply prints the output to the console.
C# VB.NET

// C#
using( DataAccessAdapter adapter = new DataAccessAdapter() )
{
    ResultsetFields fields = new ResultsetFields( 3 );
    // simply set the fields at the indexes, which will use the field name for the column name
    fields[0] = CustomerFields.CustomerId;
    fields[1] = CustomerFields.CompanyName;
    fields[2] = OrderFields.OrderId;
    RelationPredicateBucket filter = new RelationPredicateBucket(CustomerFields.Country == "Germany");
    filter.Relations.Add(CustomerEntity.Relations.OrderEntityUsingCustomerId);
    IDataReader reader = adapter.FetchDataReader( fields, filter, CommandBehavior.CloseConnection, 0, true );
    while( reader.Read() )
    {
        Console.WriteLine( "Row: {0} | {1} | {2} |", reader.GetValue( 0 ),
            reader.GetValue( 1 ), reader.GetValue( 2 ) );
    }
    reader.Close();
}


' VB.NET
Dim adapter As New DataAccessAdapter()
Try
    Dim fields As New ResultsetFields( 3 )
    ' simply set the fields at the indexes, which will use the field name for the column name
    fields(0) = CustomerFields.CustomerId
    fields(1) = CustomerFields.CompanyName
    fields(2) = OrderFields.OrderId
    Dim filter As New RelationPredicateBucket()
    filter.PredicateExpression.Add( _
        New FieldCompareValuePredicate(CustomerFields.Country, Nothing, ComparisonOperator.Equal, "Germany"))
    filter.Relations.Add(CustomerEntity.Relations.OrderEntityUsingCustomerId)
    Dim reader As IDataReader = adapter.FetchDataReader( fields, filter, CommandBehavior.CloseConnection, 0, True )
    While reader.Read()
        Console.WriteLine( "Row: {0} | {1} | {2} |", _
            reader.GetValue( 0 ), reader.GetValue( 1 ), reader.GetValue( 2 ) )
    End While
    reader.Close()
Finally
    adapter.Dispose()
End Try

Resultset projections
In the previous section we've seen that a query can be fetched as an open IDataReader, where the query can be an IRetrievalQuery object containing a stored procedure call, or a dynamically formulated query built from fields, a filter and other elements you might want to use. It is then up to you what to do with the IDataReader. It's likely you'll project the data available through the IDataReader object onto a data-structure. Projecting a resultset is a term from relational algebra; Wikipedia has a formal explanation of it: Projection (relational algebra) (opens in a new window). It comes down to creating a new set of data from an existing set of data. The existing set of data is the resultset you want to project; the new set is the projection result. LLBLGen Pro offers two different projection mechanisms: projecting an EntityView2 (see: Generated code - using the EntityView2 class) and projecting a fetched resultset, which is discussed here. Both mechanisms are roughly the same; only the origin of the source data differs, as well as the interface implemented by the projection engine used. Projections of entity view data are a little more advanced, because it's possible to execute in-memory filters on the entity objects themselves to select which data to project. This means that the projector objects, as discussed in the EntityView2 projection documentation, all implement IDataValueProjector, but the projector objects used for EntityView2 projections also implement IEntityPropertyProjector, an interface derived from IDataValueProjector. For projections of EntityView2 data, EntityPropertyProjector objects are used; for projections of resultset data, the simpler DataValueProjector objects are used. Their meaning is roughly the same, so if you're familiar with EntityView2 projections, you'll directly understand the examples below using DataValueProjector objects.
As the projection engine interfaces required for both mechanisms are fairly similar, the shipped projection engines can be used for both mechanisms. Resultset projections are done by an IGeneralDataProjector implementation. IGeneralDataProjector allows an object[] array of values to be projected onto new instances of whatever class is supported by the IGeneralDataProjector implementation, for example new entities or a DataRow in a DataTable. Which values in the object[] array are projected onto which properties of the target element created by the IGeneralDataProjector implementation is specified by the set of IDataValueProjector implementations passed in. In the following examples you'll see that the usage of the projection engines is similar to their usage in the EntityView2 projection examples. In Adapter, the DataAccessAdapter class has a method called FetchProjection with various overloads. This method produces the projection of the resultset defined by the input parameters (similar to the FetchDataReader method), or of the resultset passed in in the form of an open IDataReader object. The projection engine to perform the projection with, as well as the data to project, are passed in as well.


FetchProjection doesn't return a value; the result is in the projection engine object. This method has overloads similar to FetchDataReader's, though it doesn't accept a CommandBehavior: if a connection is open, it leaves it open; if no connection is open, it creates one and closes it afterwards.

Projecting Stored Procedure resultset onto entity collection
For this stored procedure projection example, the following SqlServer stored procedure is used:

CREATE PROCEDURE pr_CustomersAndOrdersOnCountry
    @country VARCHAR(50)
AS
SELECT * FROM Customers WHERE Country = @country
SELECT * FROM Orders
WHERE CustomerID IN
(
    SELECT CustomerID FROM Customers WHERE Country = @country
)

It returns 2 resultsets: the first contains all customers filtered on a given Country, the second all orders of those filtered customers. The stored procedure is fetched as an open IDataReader and both resultsets are projected onto entity collections: the first resultset onto an EntityCollection of CustomerEntity instances and the second onto an EntityCollection of OrderEntity instances. The stored procedure uses a wildcard select list; this is for simplicity. .NET 2.0 users are encouraged to use the generic variants of the discussed classes, as also discussed in Generated code - using the EntityView2 class.
C# VB.NET

// C#
EntityCollection customers = new EntityCollection( new CustomerEntityFactory() );
EntityCollection orders = new EntityCollection( new OrderEntityFactory() );
using(IRetrievalQuery query = RetrievalProcedures.GetCustomersAndOrdersOnCountryCallAsQuery( "Germany" ))
{
    using(DataAccessAdapter adapter = new DataAccessAdapter())
    {
        using(IDataReader reader = adapter.FetchDataReader(query, CommandBehavior.CloseConnection))
        {
            // first resultset: Customers.
            List<IDataValueProjector> valueProjectors = new List<IDataValueProjector>();
            // project value at index 0 in the resultset row onto CustomerId
            valueProjectors.Add( new DataValueProjector( CustomerFieldIndex.CustomerId.ToString(), 0, typeof( string ) ) );
            // project value at index 1 in the resultset row onto CompanyName
            valueProjectors.Add( new DataValueProjector( CustomerFieldIndex.CompanyName.ToString(), 1, typeof( string ) ) );
            // the resultset contains more columns; we just project these 2. The rest is trivial.
            DataProjectorToIEntityCollection2 projector = new DataProjectorToIEntityCollection2( customers );
            adapter.FetchProjection( valueProjectors, projector, reader );
            // second resultset: Orders.
            valueProjectors = new List<IDataValueProjector>();
            valueProjectors.Add( new DataValueProjector( OrderFieldIndex.OrderId.ToString(), 0, typeof( int ) ) );
            valueProjectors.Add( new DataValueProjector( OrderFieldIndex.CustomerId.ToString(), 1, typeof( string ) ) );
            valueProjectors.Add( new DataValueProjector( OrderFieldIndex.OrderDate.ToString(), 3, typeof( DateTime ) ) );
            // switch to the next resultset in the datareader
            reader.NextResult();
            projector = new DataProjectorToIEntityCollection2( orders );
            adapter.FetchProjection( valueProjectors, projector, reader );
            reader.Close();
        }
    }
}

' VB.NET
Dim customers As New EntityCollection( New CustomerEntityFactory() )
Dim orders As New EntityCollection( New OrderEntityFactory() )
Dim query As IRetrievalQuery = RetrievalProcedures.GetCustomersAndOrdersOnCountryCallAsQuery( "Germany" )
Try
    Dim adapter As New DataAccessAdapter()
    Try
        Dim reader As IDataReader = adapter.FetchDataReader(query, CommandBehavior.CloseConnection)
        Try
            ' first resultset: Customers.
            Dim valueProjectors As New List(Of IDataValueProjector)()
            ' project value at index 0 in the resultset row onto CustomerId
            valueProjectors.Add( New DataValueProjector( CustomerFieldIndex.CustomerId.ToString(), 0, GetType( String ) ) )
            ' project value at index 1 in the resultset row onto CompanyName
            valueProjectors.Add( New DataValueProjector( CustomerFieldIndex.CompanyName.ToString(), 1, GetType( String ) ) )
            ' the resultset contains more columns; we just project these 2. The rest is trivial.
            Dim projector As New DataProjectorToIEntityCollection2( customers )
            adapter.FetchProjection( valueProjectors, projector, reader )
            ' second resultset: Orders.
            valueProjectors = New List(Of IDataValueProjector)()
            valueProjectors.Add( New DataValueProjector( OrderFieldIndex.OrderId.ToString(), 0, GetType( Integer ) ) )
            valueProjectors.Add( New DataValueProjector( OrderFieldIndex.CustomerId.ToString(), 1, GetType( String ) ) )
            valueProjectors.Add( New DataValueProjector( OrderFieldIndex.OrderDate.ToString(), 3, GetType( DateTime ) ) )
            ' switch to the next resultset in the datareader
            reader.NextResult()
            projector = New DataProjectorToIEntityCollection2( orders )
            adapter.FetchProjection( valueProjectors, projector, reader )
            reader.Close()
        Finally
            reader.Dispose()
        End Try
    Finally
        adapter.Dispose()
    End Try
Finally
    ' Not really necessary for SqlServer, but required on Oracle, so it's mentioned here
    ' for completeness.
    query.Dispose()
End Try

Projecting Dynamic List resultset onto custom classes


We can go one step further: create a fetch of a dynamic list and fill a list of custom class instances, for example when you want lightweight Data Transfer Objects (DTOs) for transportation by a webservice. Projecting a resultset onto custom classes is .NET 2.0 only, as the projection engine uses generics. Of course, you can write your own implementation of IGeneralDataProjector which performs class instantiation and property setting using reflection on .NET 1.x.
C# .NET 2.0 VB.NET .NET 2.0

// C# .NET 2.0
List<CustomCustomer> customClasses = new List<CustomCustomer>();
ResultsetFields fields = new ResultsetFields( 4 );
fields[0] = CustomerFields.City;
fields[1] = CustomerFields.CompanyName;
fields[2] = CustomerFields.CustomerId;
fields[3] = CustomerFields.Country;
DataProjectorToCustomClass<CustomCustomer> projector =
    new DataProjectorToCustomClass<CustomCustomer>( customClasses );
// Define the projections of the fields.
List<IDataValueProjector> valueProjectors = new List<IDataValueProjector>();
valueProjectors.Add( new DataValueProjector( "City", 0, typeof( string ) ) );
valueProjectors.Add( new DataValueProjector( "CompanyName", 1, typeof( string ) ) );
valueProjectors.Add( new DataValueProjector( "CustomerID", 2, typeof( string ) ) );
valueProjectors.Add( new DataValueProjector( "Country", 3, typeof( string ) ) );
// perform the fetch combined with the projection.
using( DataAccessAdapter adapter = new DataAccessAdapter() )
{
    adapter.FetchProjection( valueProjectors, projector, fields, null, 0, true );
}

' VB.NET .NET 2.0
Dim customClasses As New List(Of CustomCustomer)()
Dim fields As New ResultsetFields( 4 )
fields(0) = CustomerFields.City
fields(1) = CustomerFields.CompanyName
fields(2) = CustomerFields.CustomerId
fields(3) = CustomerFields.Country
Dim projector As New DataProjectorToCustomClass(Of CustomCustomer)( customClasses )
' Define the projections of the fields.
Dim valueProjectors As New List(Of IDataValueProjector)()
valueProjectors.Add( New DataValueProjector( "City", 0, GetType( String ) ) )
valueProjectors.Add( New DataValueProjector( "CompanyName", 1, GetType( String ) ) )
valueProjectors.Add( New DataValueProjector( "CustomerID", 2, GetType( String ) ) )
valueProjectors.Add( New DataValueProjector( "Country", 3, GetType( String ) ) )
' perform the fetch combined with the projection.
Using adapter As New DataAccessAdapter()
    adapter.FetchProjection( valueProjectors, projector, fields, Nothing, 0, True )
End Using

Where the custom class is:

public class CustomCustomer
{
    #region Class Member Declarations
    private string _customerID, _companyName, _city, _country;
    #endregion

    public CustomCustomer()
    {
        _city = string.Empty;
        _companyName = string.Empty;
        _customerID = string.Empty;
        _country = string.Empty;
    }

    #region Class Property Declarations
    public string CustomerID
    {
        get { return _customerID; }
        set { _customerID = value; }
    }

    public string City
    {
        get { return _city; }
        set { _city = value; }
    }

    public string CompanyName
    {
        get { return _companyName; }
        set { _companyName = value; }
    }

    public string Country
    {
        get { return _country; }
        set { _country = value; }
    }
    #endregion
}
LLBLGen Pro v2.6 documentation. ©2002-2008 Solutions Design


Generated code - Getting started with filtering, Adapter
Preface
One of the most powerful aspects of the generated code and the framework it forms is the ability to formulate filters and sort clauses directly in your code and let the framework evaluate them at runtime. Once the code has been generated, developers working on business logic can formulate specific filters to request only the information necessary for the task at hand, without needing a dedicated filter in a special stored procedure.

When filters and sort clauses are used to fetch data from the persistent storage (database), they are transformed to SQL and embedded into the actual SQL query by the Dynamic Query Engine used (see Dynamic SQL). Filters are fully parameterized, so execution plans are preserved in the database server's optimizer, and because values are never concatenated into the SQL query itself, there is no risk of SQL injection attacks.

When filters and sort clauses are used in combination with EntityView2 objects (see: Generated code - using the EntityView2 class), they're interpreted in-memory and not converted to SQL. Not all filter constructs available for usage with a database are available when you're filtering data in-memory; when a predicate class is usable in-memory, this is mentioned with that predicate class. In-memory filtering doesn't use relations, just predicates.

This section describes the different ways of constructing filters using predicates and predicate expressions, as well as how to use multi-entity spanning filters, using RelationCollection objects. It furthermore tells you how to construct sort clauses to sort the data you requested. The generated code contains factory classes for most predicates and all sort clauses to ease the creation of these objects.
Adapter uses RelationPredicateBucket objects to combine a RelationCollection object and a PredicateExpression object in a single object, instead of the SelfServicing approach of having these separated. All methods of the DataAccessAdapter object which accept a filter do so by accepting a RelationPredicateBucket object. The first subsection describes in an abstract way the philosophy behind the predicate objects. This abstract discussion might look a little complex at first, but it describes the way predicates can and should be organized into predicate expressions, and when to do that to get the results you want. All predicate construction methods of LLBLGen Pro are compile time checked. This means that if you, for example, rename a field in the designer, regenerate your code and recompile your projects, the compiler will point you to every place where you used the old name and thus which lines you have to update. This is crucial for reliable software development.
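As a minimal sketch of the RelationPredicateBucket described above: the entity, relation and field names below are hypothetical stand-ins for the classes a generated Adapter project would contain; substitute the ones from your own project.

```csharp
// A bucket combines relations and a filter in one object.
// CustomerEntity, OrderFields and the relation name are assumed
// examples of generated code, not part of the ORMSupportClasses.
RelationPredicateBucket bucket = new RelationPredicateBucket();
// join Customer with Order over the CustomerId foreign key (assumed relation name)
bucket.Relations.Add(CustomerEntity.Relations.OrderEntityUsingCustomerId);
// filter on a field of the related entity
bucket.PredicateExpression.Add(OrderFields.EmployeeId == 2);
// any DataAccessAdapter method which accepts a filter accepts this bucket
```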

Upgrading from v1.0.200x.y: no PredicateFactory
In previous versions of LLBLGen Pro, v1.0.2005.1 and earlier, a class called PredicateFactory was generated by default. This class contained, for most predicate classes, a convenient construction method for each field in each entity. In larger projects however, this led to a very big class which was unusable in VS.NET due to the high number of overloads of a single method. In v2.0 of LLBLGen Pro this class is no longer generated by default and its use in your code is discouraged. You can still generate this class however: simply enable the PredicateFactory generation task in the run queue of your preset of choice (see: Designer - Generating code). This documentation avoids the usage of the PredicateFactory class, unless stated otherwise. If you need information about the PredicateFactory class, please consult the documentation of v1.0.2005.1, still available at our website.

Predicates and Predicate expressions


Filtering is the same for entities, typed views and typed lists. To construct a predicate expression, add predicate and PredicateExpression objects to the PredicateExpression object exposed by the RelationPredicateBucket class. You can then pass the RelationPredicateBucket object as a parameter to a method which retrieves or works on data. A predicate is effectively a clause used in a WHERE statement which results in True or False; 'WHERE' itself is not part of the predicate. Predicates can be grouped in a predicate expression, and predicate expressions can in turn be grouped inside other predicate expressions. Predicates are placed inside a predicate expression with the operators 'And' and 'Or'; predicate expressions are placed inside another predicate expression with those same operators. This might sound a little complex, so let's illustrate it with an example of a nested WHERE clause with some predicate expressions.

... Some Select statement WHERE ( Table1.Foo = @param1 AND Table1.Bar = @param2 ) OR Table2.Bar2 = @param3

The WHERE clause contains the filter: (Table1.Foo = @param1 AND Table1.Bar = @param2) OR Table2.Bar2 = @param3.

The following predicates are found in this filter:
- Table1.Foo = @param1
- Table1.Bar = @param2
- Table2.Bar2 = @param3

There are 2 predicate expressions:
A. (Table1.Foo = @param1 AND Table1.Bar = @param2)
B. (Table1.Foo = @param1 AND Table1.Bar = @param2) OR Table2.Bar2 = @param3

To formulate the filter correctly, we start by constructing an empty RelationPredicateBucket instance, B, which has a new, empty PredicateExpression object. Let's assume param1 has the value "One", param2 has the value "Two" and param3 has the value "Three".
// [C#]
RelationPredicateBucket B = new RelationPredicateBucket();

' [VB.NET]
Dim B As New RelationPredicateBucket()

The easiest way to proceed is then to construct predicate expression A:
// [C#]
IPredicateExpression A = new PredicateExpression();
A.Add(Table1Fields.Foo == "One");
A.AddWithAnd(Table1Fields.Bar == "Two");

' [VB.NET] .NET 1.x
Dim A As IPredicateExpression = New PredicateExpression()
A.Add(New FieldCompareValuePredicate(Table1Fields.Foo, ComparisonOperator.Equal, "One"))
A.AddWithAnd(New FieldCompareValuePredicate(Table1Fields.Bar, ComparisonOperator.Equal, "Two"))

' [VB.NET] .NET 2.0
Dim A As IPredicateExpression = New PredicateExpression()
A.Add(Table1Fields.Foo = "One")
A.AddWithAnd(Table1Fields.Bar = "Two")

A is now constructed and we can add this predicate expression as a single predicate to the predicate expression of B:
// [C#]
B.PredicateExpression.Add(A);

' [VB.NET]
B.PredicateExpression.Add(A)

There is one predicate left, OR Table2.Bar2 = @param3. Let's add that one with the Or operator directly to B's predicate expression:
// [C#]
B.PredicateExpression.AddWithOr(Table2Fields.Bar2 == "Three");

' [VB.NET] .NET 1.x
B.PredicateExpression.AddWithOr(New FieldCompareValuePredicate(Table2Fields.Bar2, ComparisonOperator.Equal, "Three"))

' [VB.NET] .NET 2.0
B.PredicateExpression.AddWithOr(Table2Fields.Bar2 = "Three")

B has now been filled with the complete filter. To sum it up, below are the complete sections of code to construct the complete predicate expression:
// [C#]
RelationPredicateBucket B = new RelationPredicateBucket();
IPredicateExpression A = new PredicateExpression();
A.Add(Table1Fields.Foo == "One");
A.AddWithAnd(Table1Fields.Bar == "Two");
B.PredicateExpression.Add(A);
B.PredicateExpression.AddWithOr(Table2Fields.Bar2 == "Three");

' [VB.NET] .NET 1.x
Dim B As New RelationPredicateBucket()
Dim A As IPredicateExpression = New PredicateExpression()
A.Add(New FieldCompareValuePredicate(Table1Fields.Foo, ComparisonOperator.Equal, "One"))
A.AddWithAnd(New FieldCompareValuePredicate(Table1Fields.Bar, ComparisonOperator.Equal, "Two"))
B.PredicateExpression.Add(A)
B.PredicateExpression.AddWithOr(New FieldCompareValuePredicate(Table2Fields.Bar2, ComparisonOperator.Equal, "Three"))

' [VB.NET] .NET 2.0
Dim B As New RelationPredicateBucket()
Dim A As IPredicateExpression = New PredicateExpression()
A.Add(Table1Fields.Foo = "One")
A.AddWithAnd(Table1Fields.Bar = "Two")
B.PredicateExpression.Add(A)
B.PredicateExpression.AddWithOr(Table2Fields.Bar2 = "Three")

There is no maximum set for the number of predicate objects you can add to a predicate expression, nor for the number of predicate expressions you can nest inside each other. As a rule of thumb: every set of predicates that should be grouped together as a single boolean expression should be placed in a separate PredicateExpression object, because the complete contents of a PredicateExpression object is placed inside a '()' pair, which groups the predicates physically in the SQL query.
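As a sketch of the rule of thumb on grouping: to produce the SQL filter (A AND B) OR (C AND D), nest two PredicateExpression objects inside the bucket's predicate expression. The field names below are hypothetical; substitute fields from your own generated project.

```csharp
// Sketch: (Foo = "One" AND Bar = "Two") OR (Foo = "Three" AND Bar = "Four")
// Table1Fields is a hypothetical generated field-creation class.
IPredicateExpression left = new PredicateExpression();
left.Add(Table1Fields.Foo == "One");
left.AddWithAnd(Table1Fields.Bar == "Two");

IPredicateExpression right = new PredicateExpression();
right.Add(Table1Fields.Foo == "Three");
right.AddWithAnd(Table1Fields.Bar == "Four");

RelationPredicateBucket bucket = new RelationPredicateBucket();
bucket.PredicateExpression.Add(left);        // emitted as ( ... )
bucket.PredicateExpression.AddWithOr(right); // emitted as OR ( ... )
```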

Creating and working with field objects
The filtering system of LLBLGen Pro uses predicate classes, which work with entity field objects (or typed view field objects). Starting with version 1.0.2005.1, LLBLGen Pro offers a convenient way to produce entity field objects: entitynameFields.FieldName and typedviewnameFields.FieldName. Example:
// C#
EntityField2 companyNameField = CustomerFields.CompanyName;

' VB.NET
Dim companyNameField As EntityField2 = CustomerFields.CompanyName

In earlier versions you needed to use:
// C#
EntityField2 companyNameField = EntityFieldFactory.Create(CustomerFieldIndex.CompanyName);

' VB.NET
Dim companyNameField As EntityField2 = EntityFieldFactory.Create(CustomerFieldIndex.CompanyName)

To utilize this feature, please add the following code to your code file:
// C#
using yourrootnamespace.HelperClasses;

' VB.NET
Imports yourrootnamespace.HelperClasses

In the section The predicate system, filter creation using operator overloading is discussed, which shows how field objects can be combined with native C# and VB.NET 2005 operators to form predicates.

Setting aliases, expressions and aggregates on fields
To set an aggregate function, an expression (see Field expressions and aggregates) or an object alias on a field, you can use method chaining via special methods which set the appropriate property. Typically, when you want to set an aggregate function on a field, your code will look like this:
// C#
// create the field
EntityField2 companyNameField = CustomerFields.CompanyName;
// set the aggregate
companyNameField.AggregateFunctionToUse = AggregateFunction.Sum;

' VB.NET
' create the field
Dim companyNameField As EntityField2 = CustomerFields.CompanyName
' set the aggregate
companyNameField.AggregateFunctionToUse = AggregateFunction.Sum

Due to the assignment statement, you can't specify a field directly in a predicate class constructor and set the aggregate function, expression or object alias at the same time. However, using the EntityField2 methods SetAggregateFunction(), SetExpression() and SetObjectAlias() you can, as these return the field object itself. Below is an example of a filter to use in a Having clause:
// C#
// SUM(Quantity) > 4 filter
IPredicate filter = (OrderDetailsFields.Quantity.SetAggregateFunction(AggregateFunction.Sum) > 4);

' VB.NET .NET 1.x
' SUM(Quantity) > 4 filter
Dim filter As IPredicate = New FieldCompareValuePredicate( _
	OrderDetailsFields.Quantity.SetAggregateFunction(AggregateFunction.Sum), _
	Nothing, ComparisonOperator.GreaterThan, 4)

' VB.NET .NET 2.0
' SUM(Quantity) > 4 filter
Dim filter As IPredicate = (OrderDetailsFields.Quantity.SetAggregateFunction(AggregateFunction.Sum) > 4)

Please note that VB.NET 2002/2003 doesn't support operator overloading and has to use the FieldCompareValuePredicate class for the filter construction.

What to include in a filter
For filtering Typed View objects, you can only filter on one or more fields in the Typed View itself. When you want to filter a Typed List, you can specify one or more fields which are part of the entities forming the base of the Typed List: you use the field objects of the entities included in the typed list, e.g. CustomerFields.CompanyName. When filtering on entities, using a method to fill an entity collection object, you have two possibilities:
- You filter on fields in the entity type to retrieve. This is the most commonly used form. Example: if you want to retrieve a set of customer objects, you filter on one or more customer fields.
- You filter on fields in a related entity of the entity type to retrieve. This is more advanced; an example is illustrated in the section on multi-entity filters.
In all cases, be sure the field you filter on is in the entity type you want to retrieve or, if you use multi-entity filtering, that the field(s) in the filter are in one of the entities mentioned in the RelationCollection of the RelationPredicateBucket used.
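A minimal sketch of the first possibility, filtering entities on one of their own fields. CustomerEntity, CustomerEntityFactory and CustomerFields are hypothetical names from a generated Adapter project; the fetch method shown is the FetchEntityCollection() method mentioned elsewhere in this documentation.

```csharp
// Fetch all customers in a given country, filtering on a field
// of the entity type itself. Substitute the generated classes
// from your own project for the assumed names used here.
RelationPredicateBucket bucket = new RelationPredicateBucket();
bucket.PredicateExpression.Add(CustomerFields.Country == "Germany");

EntityCollection<CustomerEntity> customers =
	new EntityCollection<CustomerEntity>(new CustomerEntityFactory());
using(DataAccessAdapter adapter = new DataAccessAdapter())
{
	adapter.FetchEntityCollection(customers, bucket);
}
```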


Generated code - The predicate system, Adapter
Preface
LLBLGen Pro's filtering capabilities are built using predicate objects, instantiated from predicate classes. This section describes these classes in depth: how to use them and what their purpose is, and shows an example for every predicate. To quickly determine which predicate class you need, a handy table is provided below, which should help you convert a WHERE construct in SQL to LLBLGen Pro predicates. These predicate classes are usable in queries on the database and often also in-memory. Below the table, a list of predicate classes is given which are solely usable for in-memory filtering.

Predicate classes to use for database or in-memory filtering:

SQL | Predicate class to use
Field BETWEEN 3 AND 5 / Field BETWEEN field2 AND 4 | FieldBetweenPredicate
Field = Field2 / Field < (Field2 * 4) | FieldCompareExpressionPredicate
Field IS NULL | FieldCompareNullPredicate
Field IN (1, 2, 3, 5) | FieldCompareRangePredicate
Field IN (SELECT Field FROM Foo WHERE ...) | FieldCompareSetPredicate
Field = 3 / Field != "Foo" | FieldCompareValuePredicate
Field LIKE "Foo%" | FieldLikePredicate

Predicate classes to use for in-memory filtering only:
- AggregateSetPredicate. This predicate is usable to filter a set of entities based on the result of an aggregate function executed on related entities. Example: all customers which have at least 5 orders.
- DelegatePredicate (generic and non-generic variant). This predicate is usable to filter a set of entities based on a function you write yourself. This is the most flexible way to filter entities in memory.
- MemberPredicate. This predicate is usable to filter a set of entities based on an aspect of a related entity or related set of entities. It's a meta-predicate which applies another predicate to the related member or members, and the result of that (true or false) is used to accept or reject an entity in the set to filter.

The predicate classes
LLBLGen Pro offers you a wide range of different predicate classes to define your filters with. Each predicate class whose name starts with Field works on a field, so they all have the form field [expression/operator] values. Sometimes this is not enough; for example, you may want to filter using the predicate (Field * 3) > OtherField. This can be accomplished by adding an expression to the field filtered on. See Field expressions and aggregates for more information about expressions. All predicate classes implement the IPredicate interface and derive from the Predicate class, located, like the predicate classes themselves, in the ORMSupportClasses assembly. If you want to add a specific predicate to the pack already offered, you can implement a class which also derives from Predicate and you're done. To see the full class signatures and their methods, please consult the LLBLGen Pro reference manual's SD.LLBLGen.Pro.ORMSupportClasses namespace. The various examples below utilize the EntityFieldFactory class to create new entity fields to work with the predicate classes. You can also specify existing fields from an entity instance, so these lines are equivalent:


// C#
IEntityField2 field = EntityFieldFactory.Create(OrderFieldIndex.OrderDate);
// equivalent:
EntityField2 field2 = OrderFields.OrderDate;
// equivalent, using an existing entity instance:
IEntityField2 field3 = myOrder.Fields[(int)OrderFieldIndex.OrderDate];

' VB.NET
Dim field As IEntityField2 = EntityFieldFactory.Create(OrderFieldIndex.OrderDate)
' equivalent:
Dim field2 As EntityField2 = OrderFields.OrderDate
' equivalent, using an existing entity instance:
Dim field3 As IEntityField2 = myOrder.Fields(CInt(OrderFieldIndex.OrderDate))

Please note that the entitynameFields.Fieldname constructs return an EntityField2 type, not an IEntityField2 type, to be sure operators work on them (see next section).

In-memory filtering and entity properties

To filter an EntityView2 you use in-memory filtering with normal predicate classes, be it the Field* classes or the predicate classes for in-memory filtering. To filter the entities accessible through the EntityView2, you can use normal entity fields obtained with the constructs mentioned above. However, for filtering on properties which aren't fields, you can't use EntityField2 instances. To use normal predicate classes to filter on these properties, in-memory only, you can use the class EntityProperty. This class behaves like an EntityField2 in predicate objects and Expression objects when they're used for in-memory filtering. An example of EntityProperty in action can be found in Generated code - using the EntityView2 class.
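As a short, hedged sketch of EntityProperty in an in-memory filter: it assumes 'orders' is an existing EntityCollection of order entities exposing a DefaultView, and uses the entity's IsNew property (a non-field property) as the filter target. Since EntityProperty behaves like a field in predicate objects, it can be handed to a normal predicate class:

```csharp
// In-memory only: filter the view on a property which isn't an
// entity field. EntityProperty resolves the property by name at
// filter time; "IsNew" is assumed to exist on the entity type.
EntityView2<OrderEntity> view = orders.DefaultView;
view.Filter = new FieldCompareValuePredicate(
	new EntityProperty("IsNew"), null, ComparisonOperator.Equal, true);
```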

Native language filter construction

Note: The feature discussed in this paragraph is not available to VB.NET 2002/2003 users, because VB.NET 2002/2003 doesn't support operator overloading; users of VB.NET 2002/2003 have to fall back on the more verbose way of constructing filters. VB.NET 2005 supports operator overloading in full and thus also this feature. C# 2002/2003 does support operator overloading.

Already discussed briefly in the section Getting started with filtering is the ability to formulate filters using compact, native language constructs. In the following sub-sections, which each discuss a predicate class, examples are given of how filters can be formulated using the native language constructs supported by LLBLGen Pro. It doesn't stop there: you can also construct sort expressions and predicate expressions using native language constructs. The following paragraphs will show you how this is done. Please note that the VB.NET examples only work in VB.NET 2005.

Constructing Predicate Expressions

In the paragraph Predicates and Predicate expressions, you were introduced to the concepts of predicates and predicate expressions. The following examples show equivalents of the earlier examples in that paragraph to illustrate how to use native language constructs to create predicate expressions. These predicate expressions are created by the overloaded operators & and | in C#, and And and Or in VB.NET 2005. To produce the same full filter as illustrated in Predicates and Predicate expressions, use the following code. It uses a single step, which skips the separate creation of filter A.
// C#
IPredicateExpression B =
	((Table1Fields.Foo == "One") & (Table1Fields.Bar == "Two"))   // A
	| (Table2Fields.Bar2 == "Three");

' VB.NET 2005
' the expression on the first continued line is A
Dim B As IPredicateExpression = _
	((Table1Fields.Foo = "One") And (Table1Fields.Bar = "Two")) Or _
	(Table2Fields.Bar2 = "Three")

It's also possible to negate a predicate with the native language operator ! (C#) or Not (VB.NET 2005). The same example as before, but now with the last predicate negated:
// C#
IPredicateExpression B =
	((Table1Fields.Foo == "One") & (Table1Fields.Bar == "Two"))   // A
	| !(Table2Fields.Bar2 == "Three");

' VB.NET 2005
' the expression on the first continued line is A
Dim B As IPredicateExpression = _
	((Table1Fields.Foo = "One") And (Table1Fields.Bar = "Two")) Or _
	Not (Table2Fields.Bar2 = "Three")

To chain several predicates together into a single predicate expression, you can also consider the AddWithAnd and AddWithOr methods of the PredicateExpression object. Every (predicate Operator predicate) statement results in a PredicateExpression object. (The example below shows values in the WHERE clause; LLBLGen Pro always generates parameters for values, it never includes any value in the query directly.) If you want to produce this:

... WHERE Table.Foo = "One" OR Table.Foo = "Two" OR Table.Foo = "Three" OR Table.Foo = "Four"

you should use this code:
// C#
IPredicateExpression filter =
	((TableFields.Foo == "One") | (TableFields.Foo == "Two"))
	.AddWithOr(TableFields.Foo == "Three")
	.AddWithOr(TableFields.Foo == "Four");

// which is equal to:
IPredicateExpression filter = new PredicateExpression();
filter.Add(TableFields.Foo == "One")
	.AddWithOr(TableFields.Foo == "Two")
	.AddWithOr(TableFields.Foo == "Three")
	.AddWithOr(TableFields.Foo == "Four");

// which is equal to:
IPredicateExpression filter = new PredicateExpression();
filter.Add(new FieldCompareValuePredicate(TableFields.Foo, null, ComparisonOperator.Equal, "One"))
	.AddWithOr(new FieldCompareValuePredicate(TableFields.Foo, null, ComparisonOperator.Equal, "Two"))
	.AddWithOr(new FieldCompareValuePredicate(TableFields.Foo, null, ComparisonOperator.Equal, "Three"))
	.AddWithOr(new FieldCompareValuePredicate(TableFields.Foo, null, ComparisonOperator.Equal, "Four"));

' VB.NET 2005
Dim filter As IPredicateExpression = _
	((TableFields.Foo = "One") Or (TableFields.Foo = "Two")) _
	.AddWithOr(TableFields.Foo = "Three") _
	.AddWithOr(TableFields.Foo = "Four")

' which is equal to: (VB.NET 2005)
Dim filter As New PredicateExpression()
filter.Add(TableFields.Foo = "One").AddWithOr(TableFields.Foo = "Two").AddWithOr(TableFields.Foo = "Three").AddWithOr(TableFields.Foo = "Four")

' which is equal to: (VB.NET 2002/2003/2005)
Dim filter As New PredicateExpression()
' NOTE: the statement is spread over multiple lines using line continuations for readability
filter.Add(New FieldCompareValuePredicate(TableFields.Foo, Nothing, ComparisonOperator.Equal, "One")) _
	.AddWithOr(New FieldCompareValuePredicate(TableFields.Foo, Nothing, ComparisonOperator.Equal, "Two")) _
	.AddWithOr(New FieldCompareValuePredicate(TableFields.Foo, Nothing, ComparisonOperator.Equal, "Three")) _
	.AddWithOr(New FieldCompareValuePredicate(TableFields.Foo, Nothing, ComparisonOperator.Equal, "Four"))

Fields and operators

Various operators are defined to work with EntityField2 objects to form filters, or even expressions (see for expressions: Field expressions and aggregates). The table below shows what kind of object is created when you use the specified operator on an EntityField2 object, with an example of the usage. It's recommended you consult the LLBLGen Pro reference manual, located in your LLBLGen Pro installation folder, and check which operators are defined on which classes, for example the operators on SD.LLBLGen.Pro.ORMSupportClasses.EntityField2. For example, the '==' / '=' operator (Equality) can produce different predicate object types, depending on the right hand side value: if it's another EntityField2 object or an Expression, a FieldCompareExpressionPredicate is produced; if it's null / Nothing, a FieldCompareNullPredicate is produced. See the reference manual and the predicate classes below for more details.
+ (C#) / + (VB.NET 2005) - Addition. Example: (OrderDetailsFields.Quantity + 10). Produces: Expression
| (C#) / Or (VB.NET 2005) - SortClause construction. Example: (CustomerFields.CompanyName | SortOperator.Ascending). Produces: SortClause
/ (C#) / / (VB.NET 2005) - Division. Example: (OrderDetailsFields.Quantity / 10). Produces: Expression
== (C#) / = (VB.NET 2005) - Equality. Example: (CustomerFields.CompanyName == "Foo Inc."). Produces: FieldCompareValuePredicate, FieldCompareExpressionPredicate, FieldCompareRangePredicate, FieldCompareNullPredicate
> (C#) / > (VB.NET 2005) - Greater Than. Example: (OrderDetailsFields.Quantity > 10). Produces: FieldCompareValuePredicate, FieldCompareExpressionPredicate
>= (C#) / >= (VB.NET 2005) - Greater Than Or Equal. Example: (OrderDetailsFields.Quantity >= 10). Produces: FieldCompareValuePredicate, FieldCompareExpressionPredicate
!= (C#) / <> (VB.NET 2005) - Inequality. Example: (CustomerFields.CompanyName != "Foo Inc."). Produces: FieldCompareValuePredicate, FieldCompareExpressionPredicate, FieldCompareRangePredicate, FieldCompareNullPredicate
< (C#) / < (VB.NET 2005) - Lesser Than. Example: (OrderDetailsFields.Quantity < 10). Produces: FieldCompareValuePredicate, FieldCompareExpressionPredicate
<= (C#) / <= (VB.NET 2005) - Lesser Than Or Equal. Example: (OrderDetailsFields.Quantity <= 10). Produces: FieldCompareValuePredicate, FieldCompareExpressionPredicate
% (C#) / Mod (VB.NET 2005) - Like creation. Example: (CustomerFields.CompanyName % "Foo%"). Produces: FieldLikePredicate
* (C#) / * (VB.NET 2005) - Multiplication. Example: (OrderDetailsFields.Quantity * 10). Produces: Expression
- (C#) / - (VB.NET 2005) - Subtraction. Example: (OrderDetailsFields.Quantity - 10). Produces: Expression

Not all predicate classes can be created with the operator overloads; for example FieldBetweenPredicate and FieldCompareSetPredicate aren't constructable using the operator overloads discussed in the table above. Nevertheless, using the field construction classes and the Set*() methods, it is easy to construct these predicates as well.

Mixing of construction ways

Because the combination of an EntityField2 and an operator produces a normal object like a FieldCompareValuePredicate or an Expression object, you can mix the construction ways in your code. So for example, the easy way to construct predicate expressions shown earlier also works on objects created using the predicate classes directly, as shown in the following example which mixes the two techniques:
// C#
PredicateExpression filter =
	(OrderFields.CustomerID == "CHOPS") &
	(new FieldBetweenPredicate(OrderFields.ShippedDate, dateStart, dateEnd));

' VB.NET
Dim filter As PredicateExpression = _
	(OrderFields.CustomerID = "CHOPS") And _
	(New FieldBetweenPredicate(OrderFields.ShippedDate, dateStart, dateEnd))

Predicate classes for database queries or in-memory filtering
FieldBetweenPredicate

Note: The SQL examples given can contain absolute values. The SQL generated by the predicates will never contain absolute values: absolute values are always converted to parameters.


Description: compares the entity field specified using a BETWEEN operator, and results in true if the field's value is greater than or equal to the value of valueBegin and less than or equal to the value of valueEnd. valueBegin and valueEnd can be any value or an EntityField2, as shown in the examples.

SQL equivalent:
Field BETWEEN valueStart AND valueEnd
Field BETWEEN OtherField AND valueEnd
Field BETWEEN valueStart AND OtherField

Operators: none.

Example:

// C#
filter.Add(new FieldBetweenPredicate(OrderFields.OrderDate, null, dateStart, dateEnd));
// or: ShippedDate BETWEEN RequiredDate AND dateEnd
filter.Add(new FieldBetweenPredicate(OrderFields.ShippedDate, null, OrderFields.RequiredDate, dateEnd));

' VB.NET
filter.Add(New FieldBetweenPredicate(OrderFields.OrderDate, Nothing, dateStart, dateEnd))
' or: ShippedDate BETWEEN RequiredDate AND dateEnd
filter.Add(New FieldBetweenPredicate(OrderFields.ShippedDate, Nothing, OrderFields.RequiredDate, dateEnd))

Can be used for in-memory filtering: Yes

FieldCompareExpressionPredicate
Description: compares the entity field specified with the expression specified, using the ComparisonOperator specified. See the Field expressions and aggregates section for a detailed description of expressions.

SQL equivalent examples:
Field > (OtherField * 2)
Field <= OtherField

Operators: all ComparisonOperator operators: Equal, GreaterEqual, GreaterThan, LessEqual, LesserThan, NotEqual.

Example: this example creates a predicate which compares Order.OrderDate with Order.ShippedDate. The example might look a little verbose, but Expression objects are re-usable, which allows you to define them once and re-use them each time you need them with a predicate class. The example illustrates a very basic expression; the expression you specify can be very complex. See Field expressions and aggregates for more information about expressions.

// C#
bucket.PredicateExpression.Add(new FieldCompareExpressionPredicate(
	OrderFields.OrderDate, null, ComparisonOperator.Equal,
	new Expression(OrderFields.ShippedDate)));
// which is equal to:
bucket.PredicateExpression.Add((OrderFields.OrderDate == OrderFields.ShippedDate));

' VB.NET
bucket.PredicateExpression.Add(New FieldCompareExpressionPredicate( _
	OrderFields.OrderDate, Nothing, ComparisonOperator.Equal, _
	New Expression(OrderFields.ShippedDate)))
' which is equal to: (VB.NET 2005)
bucket.PredicateExpression.Add((OrderFields.OrderDate = OrderFields.ShippedDate))

Example which filters on orders which have been shipped 4 days after the order date:

// C#
bucket.PredicateExpression.Add(new FieldCompareExpressionPredicate(
	OrderFields.ShippedDate, null, ComparisonOperator.Equal,
	new Expression(OrderFields.OrderDate, ExOp.Add, 4)));
// which is equal to:
bucket.PredicateExpression.Add((OrderFields.ShippedDate == (OrderFields.OrderDate + 4)));

' VB.NET
bucket.PredicateExpression.Add(New FieldCompareExpressionPredicate( _
	OrderFields.ShippedDate, Nothing, ComparisonOperator.Equal, _
	New Expression(OrderFields.OrderDate, ExOp.Add, 4)))
' which is equal to: (VB.NET 2005)
bucket.PredicateExpression.Add((OrderFields.ShippedDate = (OrderFields.OrderDate + 4)))

Can be used for in-memory filtering: Yes

FieldCompareNullPredicate
Description: compares the entity field specified with NULL.

SQL equivalent:
Field IS NULL

Operators: none.

Example:

// C#
bucket.PredicateExpression.Add(new FieldCompareNullPredicate(OrderFields.OrderDate, null));
// which is equal to:
bucket.PredicateExpression.Add((OrderFields.OrderDate == System.DBNull.Value));

' VB.NET
bucket.PredicateExpression.Add(New FieldCompareNullPredicate(OrderFields.OrderDate, Nothing))
' which is equal to: (VB.NET 2005)
bucket.PredicateExpression.Add((OrderFields.OrderDate = System.DBNull.Value))

Can be used for in-memory filtering: Yes

FieldCompareRangePredicate
Description: compares the entity field specified with the range of specified values, using the IN operator. The range is not a subquery; use FieldCompareSetPredicate for that. The range can be supplied in an ArrayList, in an array, or hardcoded in the predicate constructor.

SQL equivalent examples:
Field IN (1, 2, 5, 10)
Field IN ("Foo", "Bar", "Blah")

Operators: none.

Example: this example creates a predicate which compares Order.EmployeeId with the range 1, 2, 5, stored in an array. The values can also be specified directly in the constructor.

// C#
int[] values = new int[3] {1, 2, 5};
bucket.PredicateExpression.Add(new FieldCompareRangePredicate(
	OrderFields.EmployeeId, null, values));
// which is equal to:
bucket.PredicateExpression.Add(OrderFields.EmployeeId == values);

' VB.NET
Dim values As Integer() = New Integer(2) {1, 2, 5}
bucket.PredicateExpression.Add(New FieldCompareRangePredicate( _
	OrderFields.EmployeeId, Nothing, values))
' which is equal to: (VB.NET 2005)
bucket.PredicateExpression.Add(OrderFields.EmployeeId = values)

Can be used for in-memory filtering: Yes

FieldCompareSetPredicate

Page 240

Description compares the entity field specified with the set of values defined by the subquery elements, using the SetOperator specified. The FieldCompareSetPredicate is the predicate you'd like to use when you want to compare a field's value with a range of values retrieved from another table / view (or the same table / view) using a subquery. FieldCompareSetPredicates also allows you to define EXISTS () queries. It is then not necessary to specify an IEntityField2 object with the predicate's constructor (specify null / nothing) as it is ignored when building the SQL. Keep in mind that EXISTS() queries are semantically the same as IN queries and IN queries are often simpler to formulate. The FieldCompareSetPredicate supports advanced comparison operators like ANY, ALL and combinations of these with comparison operators like Equal (=) or GreaterThan (>). If the set is just 1 value in size (because you've specified a limit on the number of rows to return), it's wise to use the Equal operator instead of the IN operator as most databases will be rather slow with IN and just 1 value compared to the Equal operator. SQL equivalent examples Operators Field IN (SELECT OtherField FROM OtherTable WHERE Foo=2) EXISTS (SELECT * FROM OtherTable) All SetOperator operators: In, Exists, Equal, EqualAny, EqualAll, LessEqual, LessEqualAny, LessEqualAll, LesserThan, LesserThanAny, LesserThanAll, GreaterEqual, GreaterEqualAny, GreaterEqualAll, GreaterThan, GreaterThanAny, GreaterThanAll, NotEqual, NotEqualAny, NotEqualAll This example illustrates the query: Customer.CustomerID IN (SELECT CustomerID FROM Orders WHERE Employee=2)


// C#
bucket.PredicateExpression.Add(new FieldCompareSetPredicate(
    CustomerFields.CustomerID, null, OrderFields.CustomerID, null,
    SetOperator.In, (OrderFields.EmployeeID == 2)));

' VB.NET
bucket.PredicateExpression.Add(New FieldCompareSetPredicate( _
    CustomerFields.CustomerID, Nothing, OrderFields.CustomerID, Nothing, _
    SetOperator.In, _
    New FieldCompareValuePredicate(OrderFields.EmployeeID, _
        Nothing, ComparisonOperator.Equal, 2)))
' which is equal to:
bucket.PredicateExpression.Add(New FieldCompareSetPredicate( _
    CustomerFields.CustomerID, Nothing, OrderFields.CustomerID, Nothing, _
    SetOperator.In, (OrderFields.EmployeeID = 2)))

Can be used for in-memory filtering: No
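As the description above mentions, an EXISTS() query doesn't need a field to compare: pass null / Nothing for the IEntityField2 parameter, as it is ignored. The following is a sketch based on that description (same Northwind-style entities as the other examples in this section; whether the subquery is correlated depends on the relations and filter you specify):

```csharp
// C# - sketch: EXISTS() variant. The field to compare is ignored for
// SetOperator.Exists, hence null is passed for it. This accepts rows
// when the subquery on Orders (filtered on employee 2) returns anything.
bucket.PredicateExpression.Add(new FieldCompareSetPredicate(
    null, null, OrderFields.CustomerID, null,
    SetOperator.Exists, (OrderFields.EmployeeID == 2)));
```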

FieldCompareValuePredicate


Description: compares the entity field specified with the value specified, using the ComparisonOperator specified. If the value to compare is a string, you will get a case sensitive compare if the database is using a case sensitive collation (like Oracle). You can perform case insensitive compares however, by setting the CaseSensitiveCollation property to true prior to passing the predicate to a fetch method like FetchEntityCollection(). This will compare the UPPERCASE variant of the field with the value specified. Please note that if you've set CaseSensitiveCollation to true, you have to specify your value in uppercase as well.

SQL equivalent examples:
    Field > 3
    Field = "Foo"

Operators: all ComparisonOperator operators: Equal, GreaterEqual, GreaterThan, LessEqual, LesserThan, NotEqual

Example: this example creates a predicate which compares Order.EmployeeID with the value 2

// C#
bucket.PredicateExpression.Add(new FieldCompareValuePredicate(
    OrderFields.EmployeeID, null, ComparisonOperator.Equal, 2));
// which is equal to:
bucket.PredicateExpression.Add(OrderFields.EmployeeID == 2);

' VB.NET
bucket.PredicateExpression.Add(New FieldCompareValuePredicate( _
    OrderFields.EmployeeID, Nothing, ComparisonOperator.Equal, 2))
' which is equal to:
bucket.PredicateExpression.Add(OrderFields.EmployeeID = 2)

Can be used for in-memory filtering: Yes

Note :

When using FieldCompareValuePredicate for in-memory filters, like with an EntityView2, and you're using a value which is of a different type than the type of the field, the value to compare with is converted to the type of the field before the comparison. For example, if the field is of type Int64, and you specify as value to compare the value 1, you'll be comparing an Int64 with an Int32, which will result in the conversion of the Int32 value '1' to an Int64 value. If you don't want this conversion to happen, specify the value in the type of the field.
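For example, the conversion can be avoided by using a literal of the field's type. The following is a sketch; it assumes OrderFields.OrderId is mapped to an Int64 field and that 'orders' is an EntityView2 over an order collection:

```csharp
// C# - sketch: in-memory filter on an assumed Int64 field.
// 2L is an Int64 literal, so no per-row conversion is needed;
// a plain 2 (Int32) would be converted to Int64 for each comparison.
orders.Filter = new FieldCompareValuePredicate(
    OrderFields.OrderId, null, ComparisonOperator.Equal, 2L);
```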

FieldFullTextSearchPredicate (SqlServer specific)


Description: SqlServer specific. Compares the entity field specified with the pattern specified, using the FullTextSearch operator specified.

SQL equivalent examples:
    CONTAINS(Field, "Bla")
    FREETEXT(Field, "Bla")

Operators: all FullTextSearchOperator operators: Contains, Freetext

// C#
bucket.PredicateExpression.Add(new FieldFullTextSearchPredicate(
    CustomerFields.CompanyName, null,
    FullTextSearchOperator.Contains, "Solution"));

' VB.NET
bucket.PredicateExpression.Add(New FieldFullTextSearchPredicate( _
    CustomerFields.CompanyName, Nothing, _
    FullTextSearchOperator.Contains, "Solution"))

The next example shows a filter on two fields, using the SqlServer 2005 specific feature to accept multiple fields for the same operator.

// C#
bucket.PredicateExpression.Add(new FieldFullTextSearchPredicate(
    new IEntityField2[] { CustomerFields.CompanyName, CustomerFields.ContactName },
    FullTextSearchOperator.Contains, "Solution"));

' VB.NET
bucket.PredicateExpression.Add(New FieldFullTextSearchPredicate( _
    New IEntityField2() { CustomerFields.CompanyName, CustomerFields.ContactName }, _
    FullTextSearchOperator.Contains, "Solution"))

Can be used for in-memory filtering: No. Use FieldLikePredicate for in-memory full-text searches, using a regular expression.

FieldLikePredicate


Description: compares the entity field specified with the pattern specified, using the LIKE operator. The pattern should contain the wildcard, which is '%' (also for MS Access). FieldLikePredicate performs a LIKE compare using the case sensitivity setting of the database system the query is executed on: the SQL generated does not contain any collation information nor any case insensitive setting if the database is using case sensitive comparison operations by default (Oracle, some SqlServer installations). You can perform case insensitive compares however, if the database is case sensitive, by setting the CaseSensitiveCollation property to true prior to passing the predicate to a fetch method like FetchEntityCollection(). This will compare the UPPERCASE variant of the field with the pattern specified. Please note that if you've set CaseSensitiveCollation to true, you have to specify your pattern in uppercase as well.

SQL equivalent examples:
    Field LIKE '%bla'
    Field LIKE 'bla%'

Operators: none.

Example: this example creates a predicate which compares Customer.CompanyName to the pattern "Solution%".

// C#
filter.Add(new FieldLikePredicate(CustomerFields.CompanyName, null, "Solution%"));
// Which is equal to:
filter.Add(CustomerFields.CompanyName % "Solution%");

' VB.NET
filter.Add(New FieldLikePredicate(CustomerFields.CompanyName, Nothing, "Solution%"))
' Which is equal to: (VB.NET 2005)
filter.Add(CustomerFields.CompanyName Mod "Solution%")

Note that the operator syntax is a little odd in VB.NET, because it's not possible to add brand-new operators to VB.NET/C#; existing operators have to be overloaded instead.

Can be used for in-memory filtering: Yes. When used in in-memory filters, the pattern can either be a normal LIKE statement pattern with '%' wildcards, or it can be a full regular expression. If the pattern is a regular expression, be sure to set the property PatternIsRegEx to true. See the LLBLGen Pro reference manual for more detailed information about the properties of the FieldLikePredicate.

Predicate classes for in-memory filtering only
Below you'll find the predicate classes which are only usable for in-memory filtering.

AggregateSetPredicate


Description: predicate class which performs an aggregate function on a set of entities and returns true or false depending on whether the aggregated value matches a specified expression. The set of entities this predicate is applied on, for example a collection of OrderEntity instances, consists of the elements of a member property with the name specified which match the specified filter, for example 'Orders' in a set of CustomerEntity instances.

Operators: all ComparisonOperator operators: Equal, GreaterEqual, GreaterThan, LessEqual, LesserThan, NotEqual

Example: this example filters a set of customers to find all customers with more than 10 orders. customers is a collection of CustomerEntity instances.

// C#
IPredicate filter = new AggregateSetPredicate(
    CustomerEntity.MemberNames.Orders,  // member to apply the aggregate on
    AggregateSetFunction.Count,         // aggregate function to apply
    OrderFields.OrderId,                // value producer. Can be a field as well.
    ComparisonOperator.GreaterThan,     // comparison operator for the aggregate set
    10,                                 // value to compare with
    null);                              // additional filter to apply to the set
IEntityView2 filteredCustomers = new EntityView2<CustomerEntity>(customers, filter);

' VB.NET
' Arguments: member to apply the aggregate on, aggregate function to apply,
' value producer (can be a field as well), comparison operator for the
' aggregate set, value to compare with, additional filter to apply to the set.
Dim filter As IPredicate = New AggregateSetPredicate( _
    CustomerEntity.MemberNames.Orders, _
    AggregateSetFunction.Count, _
    OrderFields.OrderId, _
    ComparisonOperator.GreaterThan, _
    10, _
    Nothing)
Dim filteredCustomers As IEntityView2 = New EntityView2(Of CustomerEntity)(customers, filter)

DelegatePredicate and DelegatePredicate(Of T)


Description: predicate class to filter in-memory entity collections based on a specified callback function. Use this predicate to filter entities based on logic which is best expressed in a normal .NET language, like C# or VB.NET. The generic variant accepts a Predicate(Of T) (.NET 2.0+) or lambda expression (.NET 3.5) which is then used as the filter.

Operators: none

Example: this example filters a set of customers which have a CompanyName longer than 20 characters. customers is a collection of CustomerEntity instances.

// C#
// Create an entity view from customers based on the DelegatePredicate filter.
// Uses an anonymous method, which is a .NET 2.0 feature.
IEntityView2 customersWithLongName = new EntityView2<CustomerEntity>(customers,
    new DelegatePredicate(
        delegate(IEntityCore toExamine)
        {
            return ((CustomerEntity)toExamine).CompanyName.Length > 20;
        }));

' VB.NET
' VB.NET doesn't support anonymous methods, so we need a separate method.
Public Function CompareCustomerCompanyName(ByVal toExamine As CustomerEntity) As Boolean
    Return toExamine.CompanyName.Length > 20
End Function

' .. somewhere in your code where you want to filter on the CompanyName length
Dim customersWithLongName As IEntityView2 = New EntityView2(Of CustomerEntity)(customers, _
    New DelegatePredicate(AddressOf CompareCustomerCompanyName))

For a .NET 3.5 example, please see the EntityView2 filtering example.

MemberPredicate
Description: predicate class which allows in-memory filters to apply a predicate (filter) to one or more related entities. The entity this predicate is applied on has to have a member property with the name specified. Each element in that member (or the member itself, in case of a single instance) is interpreted with the specified filter. The results of that interpretation are combined using the MemberOperator specified to determine the result of this predicate: true or false, so the entity this predicate is applied on is accepted (true) or not (false).

Operators: all MemberOperator operators: All, Any or Null (no data)

Example: this example shows in-memory filtering of customers, which filters all customers which have orders with a total > 5000. The order total is calculated by SUMming the order detail total values. The order detail total is (UnitPrice * Quantity) - ((UnitPrice * Quantity) * Discount). customers is a collection of CustomerEntity instances.


// C#
IPredicate filter = new MemberPredicate(
    CustomerEntity.MemberNames.Orders,  // the member to apply the specified filter on
    MemberOperator.Any,                 // operator for applying the contained filter
    new AggregateSetPredicate(          // aggregate predicate to apply on a member
        OrderEntity.MemberNames.OrderDetails,  // member to apply the aggregate on
        AggregateSetFunction.Sum,              // aggregate function to apply
        ((OrderDetailsFields.UnitPrice * OrderDetailsFields.Quantity) -
         ((OrderDetailsFields.UnitPrice * OrderDetailsFields.Quantity)
            * OrderDetailsFields.Discount)),   // value producer expression
        ComparisonOperator.GreaterThan,        // comparison operator for the aggregate set
        5000.0M,                               // value to compare with
        null                                   // filter which specifies the set to apply the aggregate on
    ));
IEntityView2 filteredCustomers = new EntityView2<CustomerEntity>(customers, filter);

' VB.NET
' Arguments of MemberPredicate: the member to apply the specified filter on,
' the operator for applying the contained filter, and the aggregate predicate
' to apply on a member. Arguments of AggregateSetPredicate: the member to
' apply the aggregate on, the aggregate function to apply, the value producer
' expression, the comparison operator for the aggregate set, the value to
' compare with, and the filter which specifies the set to apply the aggregate on.
Dim filter As IPredicate = New MemberPredicate( _
    CustomerEntity.MemberNames.Orders, _
    MemberOperator.Any, _
    New AggregateSetPredicate( _
        OrderEntity.MemberNames.OrderDetails, _
        AggregateSetFunction.Sum, _
        ((OrderDetailsFields.UnitPrice * OrderDetailsFields.Quantity) - _
         ((OrderDetailsFields.UnitPrice * OrderDetailsFields.Quantity) _
            * OrderDetailsFields.Discount)), _
        ComparisonOperator.GreaterThan, _
        5000.0D, _
        Nothing))
Dim filteredCustomers As New EntityView2(Of CustomerEntity)(customers, filter)
LLBLGen Pro v2.6 documentation. ©2002-2008 Solutions Design


Generated code - Advanced filter usage, Adapter
Preface
This section builds upon the previous sections about filtering and shows you how to use the introduced concepts and classes in more advanced topics and situations.

Re-using filters
A predicate (expression) object is an object that is used to construct a complete WHERE clause. A predicate expression object can however be re-used to filter on, for example, other values for a particular field, without recreating the predicate objects. Say we want to filter on a given order ID in our Typed View Invoices, by using this expression:

// [C#]
IPredicateExpression invoicesFilter = new PredicateExpression();
IPredicate filterElement = (InvoicesFields.OrderID > 11000);
invoicesFilter.Add(filterElement);

' [VB.NET] .NET 1.x
Dim invoicesFilter As IPredicateExpression = New PredicateExpression()
Dim filterElement As IPredicate = New FieldCompareValuePredicate( _
    InvoicesFields.OrderID, Nothing, ComparisonOperator.GreaterThan, 11000)
invoicesFilter.Add(filterElement)

' [VB.NET] .NET 2.0
Dim invoicesFilter As IPredicateExpression = New PredicateExpression()
Dim filterElement As IPredicate = (InvoicesFields.OrderID > 11000)
invoicesFilter.Add(filterElement)

The value used to compare with is 11000. This is passed as a parameter value to the query for the parameter that is used to compare with Invoices.OrderID. If Invoices has to be re-filtered on another value just after this call, the predicate can be re-used: only the value has to be altered. We can easily do that, because we have created a FieldCompareValuePredicate. However, the predicate is defined with the IPredicate interface, not with the actual class name. When you know you are going to re-use a predicate, define it with the class name:

// [C#]
IPredicateExpression invoicesFilter = new PredicateExpression();
FieldCompareValuePredicate filterElement =
    (FieldCompareValuePredicate)(InvoicesFields.OrderID > 11000);
invoicesFilter.Add(filterElement);

' [VB.NET]
Dim invoicesFilter As IPredicateExpression = New PredicateExpression()
Dim filterElement As FieldCompareValuePredicate = New FieldCompareValuePredicate( _
    InvoicesFields.OrderID, Nothing, ComparisonOperator.GreaterThan, 11000)
invoicesFilter.Add(filterElement)


We can now set the Value for this predicate to another value to compare with:

// [C#]
// ... invoicesFilter is used in code
// Re-use the filter with a value of 10000. Set the value to 10000 to be able to do that
filterElement.Value = 10000;

' [VB.NET]
' ... invoicesFilter is used in code
' Re-use the filter with a value of 10000. Set the value to 10000 to be able to do that
filterElement.Value = 10000

The next time you specify invoicesFilter as a filter, the value to compare the field OrderID with will be 10000.

Negative predicates
Every predicate object can be negated, i.e. made to evaluate to true when the predicate itself is not true. This is accomplished by specifying 'true' for negate when using the predicate class constructors; negate is false by default. Each predicate type knows for itself where to place the NOT statement, so this is taken care of by the predicate itself. You can also negate a predicate or predicate expression by setting its Negate property to true. When using the native language filter construction methods, you can negate a predicate by simply prefixing it with '!' (C#) or 'Not' (VB.NET), as shown in the following example, which filters on employee id not equal to 2. (Note that you also could have used a '!=' (C#) or '<>' (VB.NET) operator instead of the equality operator.)

// C#
bucket.PredicateExpression.Add(!(OrderFields.EmployeeID == 2));

' VB.NET 2005
bucket.PredicateExpression.Add(Not (OrderFields.EmployeeID = 2))
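Besides the '!' / 'Not' prefix, the Negate property mentioned above can be used; this is handy when the predicate instance is constructed elsewhere, for instance by a helper method. A sketch, assuming (per the text above) that Negate is exposed on the predicate instance:

```csharp
// C# - sketch: negate an already constructed predicate via its Negate property.
IPredicate employeeFilter = (OrderFields.EmployeeID == 2);
employeeFilter.Negate = true;   // the predicate now filters on EmployeeID <> 2
bucket.PredicateExpression.Add(employeeFilter);
```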

Filtering on entity type
In the situation where you want to fetch entities of a particular type (and all subtypes of that particular type), and the entity type to fetch is in an inheritance hierarchy (see Concepts - Entity inheritance and relational models), it can be cumbersome to formulate the exact filter; after all, the predicate classes don't supply you with a proper filter to filter on a type. To filter on a particular type, use the following general mechanism. Say you want to limit a fetch to only BoardMember entities, where BoardMember is a subtype of Manager, which is a subtype of Employee. The required filter is built in the object 'filter' in the following example:

// C#
// add a filter which filters on boardmembers
filter.PredicateExpression.Add(BoardMemberEntity.GetEntityTypeFilter());

' VB.NET
' add a filter which filters on boardmembers
filter.PredicateExpression.Add(BoardMemberEntity.GetEntityTypeFilter())

The method GetEntityTypeFilter(), which is available in all entities which are part of an inheritance hierarchy, produces an IPredicateExpression object which filters on the entity type you call the method on, so in our example on BoardMember. All subtypes of the type you're filtering on will also match the filter, as they're also of the type you're filtering on (BoardMember is-a Manager is-a Employee).

Multi-entity filters
Sometimes you may want to filter on values in related entities. This is achieved by creating a RelationPredicateBucket and adding the required relations. The RelationPredicateBucket is also used to hold additional filters; these filters are added to the PredicateExpression property of the RelationPredicateBucket. For example, suppose you want to retrieve all customers who bought a product from any supplier in France. This asks for a filter on Country, but Country is not part of the Customer entity, it is part of the Supplier entity. Here's how to achieve this in code:

// [C#]
EntityCollection customers = new EntityCollection(new CustomerEntityFactory());
RelationPredicateBucket bucket = new RelationPredicateBucket();
bucket.Relations.Add(CustomerEntity.Relations.OrderEntityUsingCustomerID);
bucket.Relations.Add(OrderEntity.Relations.OrderDetailsEntityUsingOrderID);
bucket.Relations.Add(OrderDetailsEntity.Relations.ProductEntityUsingProductID);
bucket.Relations.Add(ProductEntity.Relations.SupplierEntityUsingSupplierID);
bucket.PredicateExpression.Add(SupplierFields.Country == "France");
DataAccessAdapter adapter = new DataAccessAdapter();
adapter.FetchEntityCollection(customers, bucket);

' [VB.NET]
Dim customers As New EntityCollection(New CustomerEntityFactory())
Dim bucket As New RelationPredicateBucket()
bucket.Relations.Add(CustomerEntity.Relations.OrderEntityUsingCustomerID)
bucket.Relations.Add(OrderEntity.Relations.OrderDetailsEntityUsingOrderID)
bucket.Relations.Add(OrderDetailsEntity.Relations.ProductEntityUsingProductID)
bucket.Relations.Add(ProductEntity.Relations.SupplierEntityUsingSupplierID)
bucket.PredicateExpression.Add(New FieldCompareValuePredicate(SupplierFields.Country, Nothing, ComparisonOperator.Equal, "France"))
Dim adapter As New DataAccessAdapter()
adapter.FetchEntityCollection(customers, bucket)

For VB.NET users using .NET 2.0, you could also write the predicate in this example as:
bucket.PredicateExpression.Add(SupplierFields.Country = "France")

First, the EntityCollection object is created by passing a new CustomerEntityFactory object to the constructor of a new EntityCollection. This object will store the entities that are retrieved from the persistent storage. When the collection is used to fetch entities, the CustomerEntityFactory object will be used, thus creating CustomerEntity instances. Next, a new RelationPredicateBucket is created to hold the RelationCollection and the PredicateExpression objects we want to use in our filter. We now add each relation in the correct order.
Start with the target entity, in this case Customer, and work your way down to the entity you want to filter on, in this case the SupplierEntity. Each entity on both sides of 'Relations' is included in the complete scope of the query; thus ProductEntity.Relations.SupplierEntityUsingSupplierID will include both Product and Supplier, and you can therefore filter on fields in either or both of these entities. After the relations are added to the RelationCollection of our bucket, we add the search filter to the PredicateExpression property of our bucket. We do this by adding one predicate, a FieldCompareValuePredicate, which compares Supplier.Country with the value "France". Now all the objects are ready to be used and are passed as parameters to the FetchEntityCollection method of the created DataAccessAdapter object. This will retrieve all Customer objects meeting the requirements of the filter we just defined. There is no limit to the number of relations you can add to the RelationCollection of our bucket; however, keep in mind that each added relation will result in an INNER JOIN statement (or a LEFT/RIGHT JOIN when ObeyWeakRelations is set to true, see Weak relations below, or if you've specified a join hint when adding the relation to the RelationCollection by using a different Add() overload), and with a lot of relations defined, this can hurt performance. In the example, all entities in the relations are added once. If you want to filter on an entity twice, or if you use an entity twice in two different relations, you have to specify aliases for the entities in the relations. See Advanced filtering below for more information. Also see the section Weak relations for more information about JoinHints.

Custom filters for EntityRelations
In the section above, Multi-entity filters, it was described how relations can be specified to construct a JOIN path. The JOIN clauses themselves are determined from the relation objects, thus FK-PK compares, which result in the ON clause. Sometimes it is important to specify additional predicates in this ON clause. You can do this by specifying an IPredicateExpression instance for the CustomFilter property of the EntityRelation you add to a RelationCollection. In the example below we add a custom predicate to the EntityRelation object of the relation Customer-Order, which filters on Order.ShipCountry="Mexico". It builds on the example of Multi-entity filters.

// C#
IPredicateExpression customFilter = new PredicateExpression();
customFilter.Add(OrderFields.ShipCountry == "Mexico");
// ...
relationsToUse.Add(CustomerEntity.Relations.OrderEntityUsingCustomerID).CustomFilter = customFilter;
// ...

' VB.NET
Dim customFilter As IPredicateExpression = New PredicateExpression()
customFilter.Add(New FieldCompareValuePredicate(OrderFields.ShipCountry, Nothing, ComparisonOperator.Equal, "Mexico"))
' ...
relationsToUse.Add(CustomerEntity.Relations.OrderEntityUsingCustomerID).CustomFilter = customFilter
' ...

Please pay special attention to the flag EntityRelation.CustomFilterReplacesOnClause. If this flag is set to true, the join construction logic uses the specified CustomFilter as the complete ON clause, instead of appending it with AND to the field relation clause.
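A short sketch of the CustomFilterReplacesOnClause flag just mentioned. The variable names and the IEntityRelation type used for the local variable are assumptions for illustration; the filter is the one defined in the example above:

```csharp
// C# - sketch: use the custom filter as the complete ON clause,
// instead of ANDing it to the generated FK-PK compare.
IEntityRelation customerOrders = CustomerEntity.Relations.OrderEntityUsingCustomerID;
customerOrders.CustomFilter = customFilter;          // filter on Order.ShipCountry = "Mexico"
customerOrders.CustomFilterReplacesOnClause = true;  // ON clause is now only the custom filter
relationsToUse.Add(customerOrders);
```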

Weak relations
The RelationCollection's default behavior is to use INNER JOINs to join the related entities together. This can be a problem when a relation is optional, as rows you're interested in are then silently filtered out. LLBLGen Pro solves this by obeying weak relations. A weak relation is a relation that is optional. Consider for example the entity Customer, which has a foreign key VisitingAddressID which points to the primary key AddressID of the Address entity. If VisitingAddressID is nullable, not every Customer has to have a related Address entity using the relation Customer.VisitingAddressID - Address.AddressID. If you want to read all Customer entities and you want to filter on Address.City using the Customer.VisitingAddressID - Address.AddressID relation, by default the query will do Customer INNER JOIN Address ON Customer.VisitingAddressID = Address.AddressID. This leaves out all Customer entities not having a visiting address set. If you want to filter on all Customer entities which don't have a visiting address set, you're out of luck: you need a LEFT JOIN for that. The relation Customer.VisitingAddressID - Address.AddressID is considered weak: it is an optional relation, as not every Customer entity has to have an Address entity (the field is nullable). Would VisitingAddressID not be nullable, the relation would have been considered strong. A strong relation will result in the same resultset whether you use Customer LEFT JOIN Address or Customer INNER JOIN Address. A weak relation however will result in a different resultset with Customer LEFT JOIN Address than with Customer INNER JOIN Address.

To make it easier for you to automatically set the right join types for the relations, you can set the flag RelationCollection.ObeyWeakRelations to true or false. By default, the flag is false: every relation is treated as strong and all join types are INNER JOINs. If ObeyWeakRelations is set to true, the RelationCollection's routine to produce SQL from the relations will check if a relation is weak and if so, will make sure the entity which can have an optional other related entity (in our example, 'Customer') is always included in the resultset for each row. This means that for the weak relation Customer - Address, this will result in a Customer LEFT JOIN Address. The other side of the relation, Address - Customer, is also weak, as not every Address entity has to have a related Customer entity. In this case, the Address entity is the entity which can have optional related entities, so this relation will result in: Address LEFT JOIN Customer. M:n relations are always built with INNER JOINs.

It can happen that a strong relation still results in a LEFT JOIN. This is the case when the entity to add to the JOIN list is joined with the already joined entities via an entity which was added through a weak relation. Example: Order 1:n OrderDetails m:1 Product. Order - OrderDetails is weak, as an Order doesn't have to have an OrderDetail row per se. OrderDetails - Product is strong. When ObeyWeakRelations is set to true for a typed list with these three entities (and which is using these relations), the entity Product is joined using a LEFT JOIN, even though the relation is strong. This is because the OrderDetails entity is joined using a LEFT JOIN, and will probably contain NULL values because of that. To keep these rows in the resultset, Product has to be joined with a LEFT JOIN as well. If you don't want NULL values for OrderDetails, you should specify ObeyWeakRelations as false, which results in INNER JOINs and the rows you were interested in. So ObeyWeakRelations relieves you of the burden of figuring out which join type to use for each relation to get the resultset you want. Some developers however need fine-grained control, as they want INNER JOINs on some relations and LEFT/RIGHT JOINs on others.
You can specify these join types explicitly when you add the relation to the RelationCollection, using an overload of the RelationCollection.Add() method. You specify a JoinHint enum value to signal how the start entity has to be joined to the end entity of the relation. For example:

// C#
IRelationPredicateBucket bucket = new RelationPredicateBucket();
bucket.Relations.Add(CustomerEntity.Relations.OrderEntityUsingCustomerID, JoinHint.Left);

' VB.NET
Dim bucket As IRelationPredicateBucket = New RelationPredicateBucket()
bucket.Relations.Add(CustomerEntity.Relations.OrderEntityUsingCustomerID, JoinHint.Left)

The example above illustrates the addition of the relation Customer.CustomerID - Order.CustomerID, which should be joined with a LEFT JOIN: Customer LEFT JOIN Order. By default the JoinHint for a relation is JoinHint.None. Any JoinHint other than JoinHint.None will overrule the value of ObeyWeakRelations for that particular relation.

Advanced filtering
The predicates and relations discussed up till now can do a fair amount of filtering for you. However, sometimes you need more advanced filtering, for example when an entity has to be joined multiple times into the join list using aliases, and you want to filter with one predicate on the fields of one alias and with another predicate on another alias. An example would be: get all Customers who have a visiting address in Amsterdam and a billing address in Rotterdam. Customer has two relations with Address: Customer.VisitingAddressID - Address.AddressID and Customer.BillingAddressID - Address.AddressID. Simply adding the relation CustomerEntity.Relations.AddressEntityUsingVisitingAddressID to the RelationCollection will work, but when you add the relation CustomerEntity.Relations.AddressEntityUsingBillingAddressID, you have the Address entity two times in the join list; how are you going to target one of them in a predicate? The solution is to alias the entities in the relations added to the RelationCollection, and to use the same aliases in the predicates. If you omit an alias for an entity, it is considered not aliased, and if you aliased that entity in an earlier relation added to the same RelationCollection, it will be considered a different entity in the join list. So if you alias Customer as "C" in the first relation but don't specify an alias for Customer in the second relation, you'll get two Customer entities in the join list. Use aliasing with care.


Our example of Customer and the two Address entities with the two City predicates will result in the following code. Notice the alias usage in the predicates as well.

// C#
IRelationPredicateBucket bucket = new RelationPredicateBucket();
bucket.Relations.Add(CustomerEntity.Relations.AddressEntityUsingVisitingAddressID, "VisitingAddress");
bucket.Relations.Add(CustomerEntity.Relations.AddressEntityUsingBillingAddressID, "BillingAddress");
bucket.PredicateExpression.Add(
    (AddressFields.City.SetObjectAlias("VisitingAddress") == "Amsterdam") &
    (AddressFields.City.SetObjectAlias("BillingAddress") == "Rotterdam"));
EntityCollection customers = new EntityCollection(new CustomerEntityFactory());
DataAccessAdapter adapter = new DataAccessAdapter();
adapter.FetchEntityCollection(customers, bucket);

' VB.NET
Dim bucket As IRelationPredicateBucket = New RelationPredicateBucket()
bucket.Relations.Add(CustomerEntity.Relations.AddressEntityUsingVisitingAddressID, "VisitingAddress")
bucket.Relations.Add(CustomerEntity.Relations.AddressEntityUsingBillingAddressID, "BillingAddress")
bucket.PredicateExpression.Add(New FieldCompareValuePredicate( _
    AddressFields.City, Nothing, _
    ComparisonOperator.Equal, _
    "Amsterdam", _
    "VisitingAddress"))
bucket.PredicateExpression.AddWithAnd(New FieldCompareValuePredicate( _
    AddressFields.City, Nothing, _
    ComparisonOperator.Equal, _
    "Rotterdam", _
    "BillingAddress"))
Dim customers As New EntityCollection(New CustomerEntityFactory())
Dim adapter As New DataAccessAdapter()
adapter.FetchEntityCollection(customers, bucket)

' which is equal to (VB.NET 2005)
Dim bucket As IRelationPredicateBucket = New RelationPredicateBucket()
bucket.Relations.Add(CustomerEntity.Relations.AddressEntityUsingVisitingAddressID, "VisitingAddress")
bucket.Relations.Add(CustomerEntity.Relations.AddressEntityUsingBillingAddressID, "BillingAddress")
bucket.PredicateExpression.Add( _
    (AddressFields.City.SetObjectAlias("VisitingAddress") = "Amsterdam") And _
    (AddressFields.City.SetObjectAlias("BillingAddress") = "Rotterdam"))
Dim customers As New EntityCollection(New CustomerEntityFactory())
Dim adapter As New DataAccessAdapter()
adapter.FetchEntityCollection(customers, bucket)

Note: the aliases specified have to be valid aliases in SQL, which means they should not contain spaces, for example.



Generated code - Sorting, Adapter
Preface
This section briefly discusses the server-side sorting capabilities of LLBLGen Pro. The sorting discussed below is executed as an ORDER BY clause in the generated query. If you want to sort the data in an already fetched collection class, you should use the Sort() method available on an entity collection. Please consult the LLBLGen Pro reference manual for details on Sort().

Upgrading from v1.0.200x.y: no SortClauseFactory
In previous versions of LLBLGen Pro, v1.0.2005.1 and earlier, by default a class called SortClauseFactory was generated. This class contained for each field in each entity a convenient method to produce a SortClause instance. In larger projects however this led to a very large class which was unusable in VS.NET due to the high number of overloads of a single method. In v2.0 of LLBLGen Pro this class is no longer generated by default and its use in your code is discouraged. You can still generate this class however: simply enable the SortClauseFactory generation task in the run queue of your preset of choice (see: Designer - Generating code). This documentation will avoid the usage of the SortClauseFactory class, unless stated otherwise. If you need information about the SortClauseFactory class, please consult the documentation of v1.0.2005.1, still available at our website.

Sorting
Sorting is the ability to order data on one or more fields, ascending (A -> Z) or descending (Z -> A). You do this by constructing a SortExpression with one or more SortClauses. SortClauses are simple definitions which contain information about which field to sort on and in which direction (ascending/descending). In the previous section, Advanced Filtering, aliasing entities was introduced; you can refer to a specific field in a specific aliased entity by specifying the right alias with the SortClause constructor. You can also use the EntityField's SetObjectAlias method. When you're using the native language filter construction method to formulate filters, it's convenient to also use it for SortExpressions and sort clauses. Below is an example which creates a SortExpression to sort on Customer.Country ascending and Customer.CompanyName descending. Both methods are shown (regular and native language).

// C#
SortExpression sorter = new SortExpression(CustomerFields.Country | SortOperator.Ascending) &
    (CustomerFields.CompanyName | SortOperator.Descending);

' VB.NET
Dim sorter As New SortExpression()
sorter.Add(New SortClause(CustomerFields.Country, Nothing, SortOperator.Ascending))
sorter.Add(New SortClause(CustomerFields.CompanyName, Nothing, SortOperator.Descending))

' which is equal to: (VB.NET 2005)
Dim sorter As New SortExpression(CustomerFields.Country Or SortOperator.Ascending) And _
    (CustomerFields.CompanyName Or SortOperator.Descending)
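As mentioned above, you can also sort on a field of an aliased entity by passing the alias to the SortClause constructor or by calling SetObjectAlias on the field. A minimal C# sketch, assuming the "BillingAddress" alias was defined on a relation in the accompanying RelationCollection (as in the aliasing example in the previous section):

```csharp
// C# -- sketch: sort on the City field of the entity aliased as "BillingAddress".
// The alias must match the one used when the relation was added to the RelationCollection.
ISortExpression sorter = new SortExpression();
sorter.Add(AddressFields.City.SetObjectAlias("BillingAddress") | SortOperator.Ascending);
```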


Note : If you specify one or more sort clauses and a RelationCollection (which is almost always the case with a typed list) while you also specify that duplicates are not allowed, be sure the sort clauses refer to fields in the resultset. All fields mentioned in an ORDER BY clause (the result of a sort clause) have to be in the resultset when a DISTINCT statement (the result of the specification that no duplicate rows have to be retrieved) is included; otherwise the database can't obey the sort rule and will throw an exception. When you want to sort on a field which has an aggregate function or an expression applied to it, be sure to specify the aggregate function or expression object on the field in the SortClause as well, with the same alias.

Case-insensitive sorting
On case-sensitive databases (default Oracle installations, Firebird etc.) you may want to sort alphanumeric data case-insensitively. To achieve that, set the SortClause object's CaseSensitiveCollation property to true, identical to the FieldLikePredicate system for case-insensitive filtering. Setting this property to true will make the query generator emit UPPER() around the field, thus UPPER(fieldname), or the equivalent function for UPPER() on the particular database. The following example sorts case-insensitively on CompanyName:

// C#
SortExpression sorter = new SortExpression();
sorter.Add(CustomerFields.Country | SortOperator.Ascending);
sorter.Add(CustomerFields.CompanyName | SortOperator.Descending);
sorter[1].CaseSensitiveCollation=true;

' VB.NET
Dim sorter As New SortExpression()
sorter.Add(New SortClause(CustomerFields.Country, Nothing, SortOperator.Ascending))
sorter.Add(New SortClause(CustomerFields.CompanyName, Nothing, SortOperator.Descending))
sorter(1).CaseSensitiveCollation=True

Generated code - Prefetch paths, Adapter
Preface
Adapter doesn't support load-on-demand, also known as 'lazy loading', like SelfServicing does. The reason for this is that Adapter is often used in a distributed scenario, where there is no connection with the server when related objects need to be read; reading them is an action which can then only be done on the server. This scenario requires that you pre-fetch all entities required on the client before sending the requested entity (or entities) to the client. When the client requests a graph of objects from the server, this could lead to a lot of queries. Say you want a collection of Order entities and also want to fetch their related Customer entities. Using normal code this would require 51 queries for 50 order entities: 1 for the Order entities and 50 for the individual Customer entities. This is solved by Prefetch Paths, which allow you to specify which objects to fetch together with the actual objects to fetch, using only one query per node in the path (here: 2 queries). This section describes how to use Prefetch Paths and how they work internally.

Note : If your database is case-insensitive (uses a case-insensitive collation), and you have foreign key values which only differ in casing from the PK values, it is possible that the prefetch path merging (merging a child (e.g. order) with its parent (e.g. customer)) doesn't find matching parent-child relations, because the FK value differs from the PK value (the routine uses case-sensitive compares). To fix this, set the static / shared property EntityFieldCore.CaseSensitiveStringHashCodes to false (default is true). You can also do this through a config file setting, by specifying caseSensitiveStringHashCodes with value 'false' in the .config file of your application. For more information about this config file setting, see Application Configuration through .config files.
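A minimal sketch of the .config setting mentioned in the note above; the appSettings location shown is an assumption, so consult Application Configuration through .config files for the exact element to use:

```xml
<configuration>
  <appSettings>
    <!-- make prefetch path merging compare FK/PK string values case-insensitively -->
    <add key="caseSensitiveStringHashCodes" value="false"/>
  </appSettings>
</configuration>
```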

Using Prefetch Paths, the basics
In the Preface paragraph, the example of an Order selection and their related Customer objects was mentioned. The most efficient way to fetch all that data is with 2 queries: one for all the Order entities and one for all the Customer entities. By specifying a Prefetch Path together with the fetch action for the Order entities, the logic will fetch the related entities defined by the Prefetch Path as efficiently as possible and will merge the resultsets into the result you're looking for.

Adapter uses the PrefetchPath2 class for Prefetch Path objects. PrefetchPath2 objects are created for a single entity type, specified with a value from the generated entity type enumeration. This ensures that PrefetchPathElement2 objects added to the PrefetchPath2 object actually define a valid node for the entity the path belongs to. PrefetchPathElement2 objects, the nodes added to the PrefetchPath2 object which define the entities to fetch, are created using static (shared) properties of the parent entity. The properties are named after the fields mapped on the relations they define the fetch action for. Example: the Orders collection in a Customer entity can be fetched using a Prefetch Path by using the static (shared) property PrefetchPathOrders of the CustomerEntity class to produce the PrefetchPathElement2 for 'Orders'. This way, it is easy to see which property you have to use to produce the right PrefetchPathElement2. The example of Order entities and their related Customer entities fetched with Prefetch Paths looks like this:


// C#
EntityCollection orders = new EntityCollection(new OrderEntityFactory());
IPrefetchPath2 prefetchPath = new PrefetchPath2((int)EntityType.OrderEntity);
prefetchPath.Add(OrderEntity.PrefetchPathCustomer);
IRelationPredicateBucket filter = new RelationPredicateBucket();
filter.PredicateExpression.Add(OrderFields.EmployeeId == 2);
DataAccessAdapter adapter = new DataAccessAdapter();
adapter.FetchEntityCollection(orders, filter, prefetchPath);

' VB.NET
Dim orders As New EntityCollection(New OrderEntityFactory())
Dim prefetchPath As IPrefetchPath2 = New PrefetchPath2(CType(EntityType.OrderEntity, Integer))
prefetchPath.Add(OrderEntity.PrefetchPathCustomer)
Dim filter As IRelationPredicateBucket = New RelationPredicateBucket()
filter.PredicateExpression.Add(New FieldCompareValuePredicate( _
    OrderFields.EmployeeId, Nothing, ComparisonOperator.Equal, 2))
Dim adapter As New DataAccessAdapter()
adapter.FetchEntityCollection(orders, filter, prefetchPath)

VB.NET users who use .NET 2.0 can also write the filter expression above as:

filter.PredicateExpression.Add(OrderFields.EmployeeId = 2)

This fetch action will fetch all Order entities accepted by the Employee with Id 2, and will also fetch for each of these Order entities the related Customer entity. This results in just two queries: one for the Order entities with the filter on EmployeeId = 2 and one for the Customer entities with a subquery filter using the Order entity query. The fetch logic will then merge these two resultsets using efficient hashtables in a single-pass algorithm. The example above is a rather simple graph, just two nodes. LLBLGen Pro's Prefetch Path functionality is capable of handling much more complex graphs and offers options to tweak the fetch actions per PrefetchPathElement2 object to your liking.
To illustrate that the graph doesn't have to be linear, we'll fetch a more complex graph: a set of Customer entities, all their related Order entities, all the Orders' Order Detail entities and the Customer entities' Address entities. The example illustrates how to use sublevels in the graph: use the SubPath property of the PrefetchPathElement2 object used to build graph nodes with.

// C#
EntityCollection customers = new EntityCollection(new CustomerEntityFactory());
IPrefetchPath2 prefetchPath = new PrefetchPath2((int)EntityType.CustomerEntity);
prefetchPath.Add(CustomerEntity.PrefetchPathOrders).SubPath.Add(OrderEntity.PrefetchPathOrderDetails);
prefetchPath.Add(CustomerEntity.PrefetchPathVisitingAddress);
IRelationPredicateBucket filter = new RelationPredicateBucket();
filter.PredicateExpression.Add(CustomerFields.Country == "Germany");
DataAccessAdapter adapter = new DataAccessAdapter();
adapter.FetchEntityCollection(customers, filter, prefetchPath);

' VB.NET
Dim customers As New EntityCollection(New CustomerEntityFactory())
Dim prefetchPath As IPrefetchPath2 = New PrefetchPath2(CInt(EntityType.CustomerEntity))
prefetchPath.Add(CustomerEntity.PrefetchPathOrders).SubPath.Add(OrderEntity.PrefetchPathOrderDetails)
prefetchPath.Add(CustomerEntity.PrefetchPathVisitingAddress)
Dim filter As IRelationPredicateBucket = New RelationPredicateBucket()
filter.PredicateExpression.Add(New FieldCompareValuePredicate( _
    CustomerFields.Country, Nothing, ComparisonOperator.Equal, "Germany"))
Dim adapter As New DataAccessAdapter()


adapter.FetchEntityCollection(customers, filter, prefetchPath)

The example above fetches in 4 queries (one for the Customer entities, one for the Order entities, one for the Order Detail entities and one for the Address entities) all objects required for this particular graph. As the end result, you'll get all Customer entities from Germany, which have their Orders collections filled with their related Order entities; all Order entities have their related Order Detail entities loaded and each Customer entity also has its visiting address entity loaded. The graph is also non-linear: it has two branches from Customer. You can define more if you want; there is no limit set on the number of PrefetchPathElement2 objects in a Prefetch Path. However, consider that each level in a graph is a subquery, so if you have for example a Prefetch Path with a subpath of 7 PrefetchPathElement2 objects nested into each other, you'll get 7 subqueries nested into each other.
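The SubPath chaining used above can be extended to deeper levels, with each extra level adding one more nested subquery. A minimal C# sketch, assuming Northwind-style generated code; the OrderDetailsEntity.PrefetchPathProduct property name is an assumption based on the naming convention described earlier:

```csharp
// C# -- sketch: a three-level linear path, Customer -> Orders -> OrderDetails -> Product.
// Each SubPath level below the root becomes an extra nested subquery in the generated SQL.
IPrefetchPath2 path = new PrefetchPath2((int)EntityType.CustomerEntity);
path.Add(CustomerEntity.PrefetchPathOrders)
    .SubPath.Add(OrderEntity.PrefetchPathOrderDetails)
    .SubPath.Add(OrderDetailsEntity.PrefetchPathProduct);
```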

Optimizing Prefetch Paths
The LLBLGen Pro runtime libraries create a sub-query per Prefetch Path node, to be able to filter child nodes on the query results of the parent nodes. Say you want to fetch all customers from "France" and their order objects. This would look something like the following:

// C#
IPrefetchPath2 path = new PrefetchPath2((int)EntityType.CustomerEntity);
path.Add(CustomerEntity.PrefetchPathOrders);

' VB.NET
Dim path As IPrefetchPath2 = New PrefetchPath2(CInt(EntityType.CustomerEntity))
path.Add(CustomerEntity.PrefetchPathOrders)

When the customers are fetched with the filter and the path, using DataAccessAdapter.FetchEntityCollection, it will produce SQL like the following (pseudo):

Query to fetch the customers:
SELECT CustomerID, CompanyName, ... FROM Customers WHERE Country = @country

Query to fetch the orders:
SELECT OrderID, CustomerID, OrderDate, ... FROM Orders
WHERE CustomerID IN
(
    SELECT CustomerID FROM Customers WHERE Country = @country
)

Tests will show that for small quantities of Customers, say 10, this query is less efficient than this query (pseudo):

SELECT OrderID, CustomerID, OrderDate, ... FROM Orders
WHERE CustomerID IN (@customer1, @customer2, ... , @customer10)

LLBLGen Pro lets you tweak this query generation by specifying a threshold, DataAccessAdapter.ParameterisedPrefetchPathThreshold, which tells the runtime libraries what to do: produce a subquery with a select, or a subquery with a range of values. The value set for ParameterisedPrefetchPathThreshold specifies at which number of parent entities (in our example, the customer entities) it has to switch to a subquery with a select. ParameterisedPrefetchPathThreshold is set to 50 by default. Tests showed a threshold of 200 is still efficient, but to be sure it works on every database,


the value is set to 50. Please note that for each subnode fetch, its parent node is the one which is examined for this threshold, so it's not only the root of the complete graph which is optimized with this setting. In the example in the previous paragraph, Customer - Orders - OrderDetails was fetched; for OrderDetails, the Orders node is the parent node and the entities fetched for Orders are the parent entities for the OrderDetails entities. You're advised not to set ParameterisedPrefetchPathThreshold to a value larger than 300 unless you've tested a larger value in practice and it made queries run faster; this prevents you from using slower queries than necessary. You can set the value per call, and it affects all prefetch path related fetches done with the same DataAccessAdapter instance.
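Setting the threshold is a single property assignment on the adapter. A minimal C# sketch; customers, filter and prefetchPath are assumed to be set up as in the earlier examples:

```csharp
// C# -- sketch: raise the threshold so that for up to 100 parent entities
// the child query gets a parameter list instead of a nested SELECT.
DataAccessAdapter adapter = new DataAccessAdapter();
adapter.ParameterisedPrefetchPathThreshold = 100;
adapter.FetchEntityCollection(customers, filter, prefetchPath);
```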

Polymorphic Prefetch Paths
Because inheritance would be rather useless without polymorphism, LLBLGen Pro supports polymorphism in its prefetch path technology as well. Polymorphic prefetch paths work the same as normal prefetch paths, only now they work on subtypes as well. Say you have the following hierarchy: Employee - Manager - BoardMember, and BoardMember has a relation with CompanyCar (which can be a hierarchy of its own). If you then fetch all employees (which can be of type BoardMember) and you want to load for each BoardMember loaded in that fetch also its related CompanyCar, you define the prefetch path as you'd expect:

// C#
IPrefetchPath2 prefetchPath = new PrefetchPath2((int)EntityType.EmployeeEntity);
EntityCollection employees = new EntityCollection(new EmployeeEntityFactory());
// specify the actual path: BoardMember - CompanyCar
prefetchPath.Add(BoardMemberEntity.PrefetchPathCompanyCar);
// .. fetch code

' VB.NET
Dim employees As New EntityCollection(New EmployeeEntityFactory())
Dim prefetchPath As IPrefetchPath2 = New PrefetchPath2(CInt(EntityType.EmployeeEntity))
' specify the actual path: BoardMember - CompanyCar
prefetchPath.Add(BoardMemberEntity.PrefetchPathCompanyCar)
' .. fetch code

LLBLGen Pro will then only load those CompanyCar entities which are referenced by a BoardMember entity, and will merge them at runtime with the BoardMember entities loaded in the fetch.

Multi-branched Prefetch Paths
Prefetch Paths can also be multi-branched. Multi-branched means that two or more subpaths are defined from the same path node. As Prefetch Paths are defined per-line this can be a bit of a problem. The example below defines two subpaths from the OrderEntity node and it illustrates how to create this multi-branched Prefetch Path definition:

// C#
PrefetchPath2 path = new PrefetchPath2((int)EntityType.CustomerEntity);
IPrefetchPathElement2 orderElement = path.Add(CustomerEntity.PrefetchPathOrders);
orderElement.SubPath.Add(OrderEntity.PrefetchPathOrderDetails); // branch 1
orderElement.SubPath.Add(OrderEntity.PrefetchPathEmployee); // branch 2

' VB.NET
Dim path As New PrefetchPath2(CInt(EntityType.CustomerEntity))
Dim orderElement As IPrefetchPathElement2 = path.Add(CustomerEntity.PrefetchPathOrders)
orderElement.SubPath.Add(OrderEntity.PrefetchPathOrderDetails) ' branch 1
orderElement.SubPath.Add(OrderEntity.PrefetchPathEmployee) ' branch 2

Advanced Prefetch Paths

The previous examples showed some of the power of the Prefetch Path functionality, but sometimes you need some extra features, like filtering on the related entities, sorting the related entities fetched and limiting the number of related entities fetched. These features are available on the PrefetchPathElement2 object, and are also accessible through overloads of the PrefetchPath2.Add() method. Let's say you want all employees and the last order each of them processed. The following example illustrates this, using Prefetch Paths: it sorts the related entities and limits the output to just 1.

// C#
// For .NET 1.x, use the non-generic version of EntityCollection
EntityCollection<EmployeeEntity> employees = new EntityCollection<EmployeeEntity>();
IPrefetchPath2 prefetchPath = new PrefetchPath2((int)EntityType.EmployeeEntity);
ISortExpression sorter = new SortExpression();
sorter.Add(OrderFields.OrderDate | SortOperator.Descending);
prefetchPath.Add(EmployeeEntity.PrefetchPathOrders, 1, null, null, sorter);
DataAccessAdapter adapter = new DataAccessAdapter();
adapter.FetchEntityCollection(employees, null, prefetchPath);

' VB.NET
Dim employees As New EntityCollection(New EmployeeEntityFactory())
Dim prefetchPath As IPrefetchPath2 = New PrefetchPath2(CInt(EntityType.EmployeeEntity))
Dim sorter As ISortExpression = New SortExpression()
sorter.Add(New SortClause(OrderFields.OrderDate, Nothing, SortOperator.Descending))
prefetchPath.Add(EmployeeEntity.PrefetchPathOrders, 1, Nothing, Nothing, sorter)
Dim adapter As New DataAccessAdapter()
adapter.FetchEntityCollection(employees, Nothing, prefetchPath)

Besides a sort expression, you can specify a RelationCollection together with a PredicateExpression when you add a PrefetchPathElement2 to the PrefetchPath2 object, to ensure that the fetched related entities are the ones you need. For example, the following code snippet illustrates the prefetch path Customer - Orders, but also filters the customers on their related orders. As this filter belongs to the customer fetch, it shouldn't be added to the Orders node, but should be passed to the FetchEntityCollection() method call.

// C#
EntityCollection<CustomerEntity> customers = new EntityCollection<CustomerEntity>();
RelationPredicateBucket customerFilter = new RelationPredicateBucket();
// fetch all customers which have orders shipped to brazil.
customerFilter.Relations.Add(CustomerEntity.Relations.OrderEntityUsingCustomerId);
customerFilter.PredicateExpression.Add(OrderFields.ShipCountry=="Brazil");
// load for all customers fetched their orders.
PrefetchPath2 path = new PrefetchPath2((int)EntityType.CustomerEntity);
path.Add(CustomerEntity.PrefetchPathOrders);
// perform the fetch
using(DataAccessAdapter adapter = new DataAccessAdapter())
{
    adapter.FetchEntityCollection(customers, customerFilter, path);
}

' VB.NET
Dim customers As New EntityCollection(Of CustomerEntity)()
Dim customerFilter As New RelationPredicateBucket()
' fetch all customers which have orders shipped to brazil.
customerFilter.Relations.Add(CustomerEntity.Relations.OrderEntityUsingCustomerId)
customerFilter.PredicateExpression.Add(OrderFields.ShipCountry="Brazil")


' load for all customers fetched their orders.
Dim path As New PrefetchPath2(CInt(EntityType.CustomerEntity))
path.Add(CustomerEntity.PrefetchPathOrders)
' Perform the fetch
Using adapter As New DataAccessAdapter()
    adapter.FetchEntityCollection(customers, customerFilter, path)
End Using

M:n related entities
Prefetch Paths can also be used to fetch m:n related entities; they work the same as other related entities. There is one caveat: the intermediate entities are not fetched with an m:n relation Prefetch Path. For example, if you fetch a set of Customer entities and also their m:n related Employee entities, the intermediate entity, Order, is not fetched. If you specify, via another PrefetchPathElement2, to fetch the Order entities as well, and via a SubPath also their related Employee entities, these Employee entities are not the same objects as the ones located in the Employees collection of every Customer entity you fetched.

Derived entity classes and Prefetch Paths
If you create or generate derived entity classes for your Adapter entities, you also have derived new entity factory classes from the entity factory classes for the entity classes. The PrefetchPathElement2 objects produced by the static (shared) properties however will contain an entity factory for the original generated entity classes. To make sure the fetched related entities are created using the proper entity factory, specify the entity factory to use when you add the PrefetchPathElement2 to the PrefetchPath2 object, using the proper PrefetchPath2.Add() overload. You can also set the entity factory to use by setting the PrefetchPathElement2.EntityFactoryToUse property.
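A minimal C# sketch of setting the factory via the EntityFactoryToUse property described above; MyOrderEntityFactory is a hypothetical factory for a derived Order entity class:

```csharp
// C# -- sketch: make a prefetch path node produce derived entity instances.
// MyOrderEntityFactory is a hypothetical factory for a derived OrderEntity class.
IPrefetchPath2 path = new PrefetchPath2((int)EntityType.CustomerEntity);
IPrefetchPathElement2 ordersElement = path.Add(CustomerEntity.PrefetchPathOrders);
ordersElement.EntityFactoryToUse = new MyOrderEntityFactory();
```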

Single entity fetches and Prefetch Paths
Prefetch Paths can also be used when you fetch a single entity, either by fetching the entity using a primary key or via a unique constraint fetch. Below are two examples, one using the primary key and one using a unique constraint. Both fetch the m:n related Employees for the particular Customer entity instantiated.

Primary key fetch

// C#
IPrefetchPath2 prefetchPath = new PrefetchPath2((int)EntityType.CustomerEntity);
prefetchPath.Add(CustomerEntity.PrefetchPathEmployees);
CustomerEntity customer = new CustomerEntity("BLONP");
DataAccessAdapter adapter = new DataAccessAdapter();
adapter.FetchEntity(customer, prefetchPath);

' VB.NET
Dim prefetchPath As IPrefetchPath2 = New PrefetchPath2(CType(EntityType.CustomerEntity, Integer))
prefetchPath.Add(CustomerEntity.PrefetchPathEmployees)
Dim customer As New CustomerEntity("BLONP")
Dim adapter As New DataAccessAdapter()
adapter.FetchEntity(customer, prefetchPath)

Unique constraint fetch

// C#
CustomerEntity customer = new CustomerEntity();
customer.CompanyName = "Blauer See Delikatessen";
IPrefetchPath2 prefetchPath = new PrefetchPath2((int)EntityType.CustomerEntity);
prefetchPath.Add(CustomerEntity.PrefetchPathEmployees);
DataAccessAdapter adapter = new DataAccessAdapter();
adapter.FetchEntityUsingUniqueConstraint(customer, customer.ConstructFilterForUCCompanyName(), prefetchPath);

' VB.NET
Dim customer As New CustomerEntity()
customer.CompanyName = "Blauer See Delikatessen"
Dim prefetchPath As IPrefetchPath2 = New PrefetchPath2(CType(EntityType.CustomerEntity, Integer))
prefetchPath.Add(CustomerEntity.PrefetchPathEmployees)
Dim adapter As New DataAccessAdapter()
adapter.FetchEntityUsingUniqueConstraint(customer, customer.ConstructFilterForUCCompanyName(), prefetchPath)

Prefetch Paths and Paging
Starting with version 1.0.2005.1, LLBLGen Pro supports paging functionality in combination with Prefetch Paths. Due to the complex nature of the prefetch path queries, especially with per-node filters, sort expressions etc., paging wasn't possible before; the DataAccessAdapter.ParameterisedPrefetchPathThreshold setting (see Optimizing Prefetch Paths earlier in this section) makes paging possible. If you want to utilize paging in combination with prefetch paths, be sure to set DataAccessAdapter.ParameterisedPrefetchPathThreshold to a value larger than the page size you want to use. You can use paging in combination with a prefetch path and a page size larger than DataAccessAdapter.ParameterisedPrefetchPathThreshold, but it will be less efficient. To use paging in combination with prefetch paths, use one of the overloads you'd normally use for fetching data with a prefetch path which accept a page size and page number as well.
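A minimal C# sketch of the approach described above; the exact overload signature is an assumption and should be verified in the LLBLGen Pro reference manual:

```csharp
// C# -- sketch: fetch page 2, 25 rows per page, of customers plus their orders.
// The threshold is raised above the page size, as advised above.
DataAccessAdapter adapter = new DataAccessAdapter();
adapter.ParameterisedPrefetchPathThreshold = 50;
EntityCollection customers = new EntityCollection(new CustomerEntityFactory());
IPrefetchPath2 path = new PrefetchPath2((int)EntityType.CustomerEntity);
path.Add(CustomerEntity.PrefetchPathOrders);
// last two arguments: page number, page size (overload assumed from the reference manual)
adapter.FetchEntityCollection(customers, null, 0, null, path, 2, 25);
```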

Generated code - Excluding / Including fields for fetches, Adapter
Preface
Sometimes entities have fields which can be big in size, for example an Employee entity with a Photo field which can contain an image, or an Order entity which can contain the Proposal Word document. In other situations you may want to fetch only a subset of the fields of an entity. It then would be great if you could exclude one or more fields from the fetch of the entity or collection of entities. LLBLGen Pro allows you to do that: in single entity fetches, entity collection fetches and prefetch path nodes.

To specify which fields to exclude, you have two options:
1. Specify the fields to exclude.
2. Specify the fields to fetch (i.e. include), which means the rest is excluded.

To do this, you use the ExcludeIncludeFieldsList class and its ExcludeContainedFields property to specify what LLBLGen Pro should do with the fields contained in the list. By default, ExcludeContainedFields is set to true, which means that all fields added to the ExcludeIncludeFieldsList instance are seen as the fields to exclude (option 1 above). When ExcludeContainedFields is set to false, the fields added to the instance are seen as the fields to fetch (so the rest is excluded, option 2 above). This gives great flexibility in specifying the excluded fields: if the list of fields to exclude would be larger than the list of fields to fetch, use option 2; if it is smaller, use option 1.

To fetch an entity or collection of entities with an ExcludeIncludeFieldsList, use one of the overloads of the various fetch methods which accept an ExcludeIncludeFieldsList object. After you've fetched the entities, for example a set of employees without their Photo or Notes fields, you may want to fetch that excluded data into the entity objects you have in memory.
For example, you have a grid of Employee entities, and when a given Employee entity in that grid is selected, the detail view of the entity requires the Photo and Notes fields, which weren't fetched for the grid. LLBLGen Pro lets you fetch these excluded fields into an entity which was fetched without them. The examples below show how to specify the fields to exclude and how to fetch the data back into the entities.

In the .NET 2.0+ build of the ORMSupportClasses runtime library, two utility classes are present: ExcludeFieldsList and IncludeFieldsList. Both derive from ExcludeIncludeFieldsList and set the excludeContainedFields flag through the constructor. They don't contain any extra logic, but they make it easier for people reading the code to see whether the ExcludeIncludeFieldsList instance contains fields to exclude or fields to include.
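The examples below use the exclude variant; as a complement, here is a minimal C# sketch of the include variant (option 2 above) using the IncludeFieldsList utility class mentioned here. The fetch overload mirrors the one used in the entity fetch example below:

```csharp
// C# -- sketch: fetch only CustomerId and CompanyName; all other fields stay excluded.
IncludeFieldsList includedFields = new IncludeFieldsList();
includedFields.Add(CustomerFields.CustomerId);
includedFields.Add(CustomerFields.CompanyName);
EntityCollection<CustomerEntity> customers = new EntityCollection<CustomerEntity>();
using(DataAccessAdapter adapter = new DataAccessAdapter())
{
    adapter.FetchEntityCollection(customers, null, 0, null, null, includedFields);
}
```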

Fetching excluded fields in batches
The following examples first fetch the entities without some excluded fields, and then show how to fetch the data back into the existing entities. LLBLGen Pro does this in batches. So if you have 10,000 entities in memory and you want the data you excluded when fetching these entities fetched into them, LLBLGen Pro will break the set of 10,000 up into smaller sets and fetch the excluded fields per batch. The batch size is controlled using the ParameterisedPrefetchPathThreshold setting which is also used for parameterized prefetch paths; the same setting is used because the logic to fetch the excluded fields follows the same pattern. The initial batch size is 5 * the value of that threshold, where the number of primary key fields * the number of entities to process is used to determine the number of batches to fetch, as the batch size set controls the number of parameters emitted in the query. Example: if the entity has a primary key of two fields, and you specify a ParameterisedPrefetchPathThreshold of 10, the initial batch size is 50, and because the number of primary key fields is 2, 25 entities per batch are processed, resulting in 2 batches if you have 50 entities to process.


To fetch excluded fields into existing entities, use the method DataAccessAdapter.FetchExcludedFields. Please see the LLBLGen Pro reference manual for overloads on the various fetch methods for entities (single entities and collections) to see which ones accept an ExcludeIncludeFieldsList.

Entity fetch example
The following example uses the default constructor of ExcludeIncludeFieldsList, which sets the ExcludeContainedFields property to true. This means that the fields added to it are meant to be excluded. It fetches all Northwind customers with some fields excluded; after the fetch, the excluded fields are loaded into the entities again.

// C#
ExcludeIncludeFieldsList excludedFields = new ExcludeIncludeFieldsList();
excludedFields.Add(CustomerFields.ContactName);
excludedFields.Add(CustomerFields.Country);

// fetch a collection of customers.
EntityCollection<CustomerEntity> customers = new EntityCollection<CustomerEntity>();
SortExpression sorter = new SortExpression(CustomerFields.CustomerId | SortOperator.Descending);
using(DataAccessAdapter adapter = new DataAccessAdapter())
{
    adapter.FetchEntityCollection(customers, null, 0, sorter, null, excludedFields);
}

// fetch a single customer
CustomerEntity c = new CustomerEntity("CHOPS");
using(DataAccessAdapter adapter = new DataAccessAdapter())
{
    adapter.FetchEntity(c, null, null, excludedFields);
}

// ...

// load the excluded fields into the entities already loaded:
using(DataAccessAdapter adapter = new DataAccessAdapter())
{
    adapter.FetchExcludedFields(customers, excludedFields);
    adapter.FetchExcludedFields(c, excludedFields);
}

' VB.NET
Dim excludedFields As New ExcludeIncludeFieldsList()
excludedFields.Add(CustomerFields.ContactName)
excludedFields.Add(CustomerFields.Country)

' fetch a collection of customers.
Dim customers As New EntityCollection(Of CustomerEntity)()
Dim sorter As New SortExpression(CustomerFields.CustomerId Or SortOperator.Descending)
Using adapter As New DataAccessAdapter()
    adapter.FetchEntityCollection(customers, Nothing, 0, sorter, Nothing, excludedFields)
End Using

' fetch a single customer
Dim c As New CustomerEntity("CHOPS")
Using adapter As New DataAccessAdapter()
    adapter.FetchEntity(c, Nothing, Nothing, excludedFields)
End Using

' ...

' load the excluded fields into the entities already loaded:
Using adapter As New DataAccessAdapter()
    adapter.FetchExcludedFields(customers, excludedFields)
    adapter.FetchExcludedFields(c, excludedFields)
End Using

Prefetch path example
Excluding fields for fetches also works with prefetch paths. The following example illustrates that.

// C#
ExcludeIncludeFieldsList excludedFields = new ExcludeIncludeFieldsList();
excludedFields.Add(OrderFields.OrderDate);

// Fetch the first 25 customers with their orders which have the OrderDate excluded.
PrefetchPath2 path = new PrefetchPath2((int)EntityType.CustomerEntity);
path.Add(CustomerEntity.PrefetchPathOrders, excludedFields);
EntityCollection<CustomerEntity> customers = new EntityCollection<CustomerEntity>();
using(DataAccessAdapter adapter = new DataAccessAdapter())
{
    adapter.FetchEntityCollection(customers, null, 25, null, path);
}

' VB.NET
Dim excludedFields As New ExcludeIncludeFieldsList()
excludedFields.Add(OrderFields.OrderDate)

' Fetch the first 25 customers with their orders which have the OrderDate excluded.
Dim path As New PrefetchPath2(CInt(EntityType.CustomerEntity))
path.Add(CustomerEntity.PrefetchPathOrders, excludedFields)
Dim customers As New EntityCollection(Of CustomerEntity)()
Using adapter As New DataAccessAdapter()
    adapter.FetchEntityCollection(customers, Nothing, 25, Nothing, path)
End Using

LLBLGen Pro v2.6 documentation. ©2002-2008 Solutions Design


Generated code - Transactions, Adapter
Preface
Adapter supports, like SelfServicing, both COM+ and ADO.NET transactions. Below we explain how ADO.NET transactions are used in Adapter and how COM+ transactions can be used with it. COM+, or rather: Enterprise Services, offers more than just transactional behavior: you can also use COM+ to enable Just-In-Time activation (JIT) or object pooling. The COM+ transaction section below briefly discusses that as well. Adapter automatically uses ADO.NET transactions for recursive saves and for save/delete actions on collections of entities, unless the adapter object is owned by a ComPlusAdapterContext object.

Normal native database transactions
(It's assumed the database used supports transactions, which is the case for all major databases like SqlServer.) Native database transactions are provided by ADO.NET: a transaction is part of an ADO.NET connection object, and database statements can be executed within the same transaction by using the same connection object. LLBLGen Pro's native database transactions are implemented in the DataAccessAdapter object. You simply start a transaction using a DataAccessAdapter object, execute methods of that DataAccessAdapter object and roll back or commit the transaction. Because the transactional code is inside the DataAccessAdapter, every method of the DataAccessAdapter object you call after you've started a transaction is run inside that transaction, including stored procedure calls, entity fetches, entity collection saves etc. This greatly simplifies the programming of code using the variety of functionality the DataAccessAdapter object offers. You don't have to add entity objects or entity collection objects to the DataAccessAdapter object to make them participate in the transaction; just call a DataAccessAdapter method and it runs inside the transaction of that particular DataAccessAdapter object. If you wish to run a particular action outside of a transaction, create a new DataAccessAdapter object for that particular action.

Note : If you start a new transaction by calling StartTransaction(), the connection is kept open until Rollback() or Commit() is called.

An example will help illustrate the usage of the transaction functionality of the DataAccessAdapter object. We're going to update two different entities in one transaction. For this example, it has to be done in one go, as an atomic unit, and therefore requires a transaction. (The data is rather bogus; it's for illustration purposes only.)
C# VB.NET

// [C#]
// create adapter for fetching and the transaction.
DataAccessAdapter adapter = new DataAccessAdapter();
// start the transaction.
adapter.StartTransaction(IsolationLevel.ReadCommitted, "TwoUpdates");
try
{
    // fetch the two entities
    CustomerEntity customer = new CustomerEntity("CHOPS");
    OrderEntity order = new OrderEntity(10254);
    adapter.FetchEntity(customer);
    adapter.FetchEntity(order);

    // alter the entities
    customer.Fax = "12345678";
    order.Freight = 12;

    // save the two entities again.
    adapter.SaveEntity(customer);
    adapter.SaveEntity(order);

    // done
    adapter.Commit();
}
catch
{
    // abort, roll back the transaction
    adapter.Rollback();
    // bubble up exception
    throw;
}
finally
{
    // clean up. Necessary action.
    adapter.Dispose();
}

' [VB.NET]
' create adapter for fetching and the transaction.
Dim adapter As New DataAccessAdapter()
' start the transaction.
adapter.StartTransaction(IsolationLevel.ReadCommitted, "TwoUpdates")
Try
    ' fetch the two entities
    Dim customer As New CustomerEntity("CHOPS")
    Dim order As New OrderEntity(10254)
    adapter.FetchEntity(customer)
    adapter.FetchEntity(order)

    ' alter the entities
    customer.Fax = "12345678"
    order.Freight = 12

    ' save the two entities again.
    adapter.SaveEntity(customer)
    adapter.SaveEntity(order)

    ' done
    adapter.Commit()
Catch
    ' abort, roll back the transaction
    adapter.Rollback()
    ' bubble up exception
    Throw
Finally
    ' clean up. Necessary action.
    adapter.Dispose()
End Try

First a DataAccessAdapter object is created and a transaction is started. As soon as you start the transaction, a database connection is opened and usable. This is also the reason why you must include a finally clause and call Dispose() when the DataAccessAdapter object is no longer needed; this is good practice anyway. The code is the same as if you didn't start a transaction, except for the Commit() / Rollback()


combination at the end. Simply start a transaction, call the methods you want to call and, if no exceptions are caught, call Commit(), otherwise call Rollback(). It's best practice to embed the usage of a transaction in a try/catch/finally statement, as is done in the example above. This ensures that if something fails during the transaction, everything is rolled back, or otherwise committed correctly at the end.

Transaction savepoints
Most databases support transaction savepoints. Savepoints make fine-grained transaction control possible on a semi-nested level, which can be required as ADO.NET doesn't support nested transactions. Savepoints let you define a point in a transaction to which you can roll back without rolling back the complete transaction. This can be handy if you have successfully saved some entities in a transaction and another save fails, but that failure shouldn't terminate the whole transaction: you just roll the transaction back to a given savepoint. LLBLGen Pro offers you the ability to define savepoints in a transaction. The following example illustrates the savepoint functionality. It first saves a new address entity and then creates a savepoint by calling SaveTransaction(). It then saves a new customer entity, taking into account that this save can fail; if it does, the code rolls back to the set savepoint instead of rolling back the complete transaction. Consider the example an illustration of the feature; in real code, the code utilizing the transaction will probably span several classes and methods. Savepoints are not supported with COM+ transactions.
C# VB.NET

//C#
DataAccessAdapter adapter = new DataAccessAdapter();
try
{
    adapter.StartTransaction(IsolationLevel.ReadCommitted, "SavepointRollback");
    // first save a new address
    AddressEntity newAddress = new AddressEntity();
    // ... fill the address entity with values
    // save it.
    adapter.SaveEntity(newAddress, true);
    // create a savepoint in the transaction
    adapter.SaveTransaction("SavepointAddress");
    // save a new customer
    CustomerEntity newCustomer = new CustomerEntity();
    // ... fill the customer entity with values
    newCustomer.VisitingAddress = newAddress;
    newCustomer.BillingAddress = newAddress;
    try
    {
        adapter.SaveEntity(newCustomer, true);
    }
    catch(Exception ex)
    {
        // something was wrong.
        // ... handle ex here.
        // roll back to savepoint.
        adapter.Rollback("SavepointAddress");
    }
    // commit the transaction. If the customer save failed,
    // only the address is saved, otherwise both.
    adapter.Commit();
}
catch
{
    // fatal error, roll back everything
    adapter.Rollback();
    throw;
}
finally
{
    adapter.Dispose();
}

' VB.NET
Dim adapter As New DataAccessAdapter()
Try
    adapter.StartTransaction(IsolationLevel.ReadCommitted, "SavepointRollback")
    ' first save a new address
    Dim newAddress As New AddressEntity()
    ' ... fill the address entity with values
    ' save it.
    adapter.SaveEntity(newAddress, True)
    ' create a savepoint in the transaction
    adapter.SaveTransaction("SavepointAddress")
    ' save a new customer
    Dim newCustomer As New CustomerEntity()
    ' ... fill the customer entity with values
    newCustomer.VisitingAddress = newAddress
    newCustomer.BillingAddress = newAddress
    Try
        adapter.SaveEntity(newCustomer, True)
    Catch ex As Exception
        ' something was wrong.
        ' ... handle ex here.
        ' roll back to savepoint.
        adapter.Rollback("SavepointAddress")
    End Try
    ' commit the transaction. If the customer save failed,
    ' only the address is saved, otherwise both.
    adapter.Commit()
Catch
    ' fatal error, roll back everything
    adapter.Rollback()
    Throw
Finally
    adapter.Dispose()
End Try


Note : Microsoft Access doesn't support savepoints in transactions, so this feature is not supported when you use LLBLGen Pro with MS Access.

COM+ transactions
LLBLGen Pro supports COM+ transactions for Adapter as well, through a special class called ComPlusAdapterContext. This class is generated in the DataAccessAdapter class file and is thus located in the database specific project. The ComPlusAdapterContext class is a thin wrapper class which implements abstract methods of its base class, ComPlusAdapterContextBase. The ComPlusAdapterContext class embeds a DataAccessAdapter object which acts the same as a normal DataAccessAdapter object, except that it utilizes the COM+ context held by the ComPlusAdapterContext instance. Because the ComPlusAdapterContext class takes care of the database connection creation, the COM+ context it holds makes sure that any database activity which uses the connection objects created by the ComPlusAdapterContext class is monitored by COM+ (the MS DTC service). The ComPlusAdapterContext class is just one example of how COM+ can be used together with Adapter: you can also create your own class deriving from ComPlusAdapterContextBase and add different EnterpriseServices attributes to it to enable other services offered by COM+, like Just-In-Time activation or object pooling. You can, for example, also grab the connection string from the COM+ object definition defined in Windows' Component Services. Because COM+ is implemented in .NET using Enterprise Services, the class using the ComPlusAdapterContext object has to derive from ServicedComponent. This way, transactions started outside the class using the ComPlusAdapterContext flow through to the actions performed by the DataAccessAdapter object inside the ComPlusAdapterContext object. You also have to reference the System.EnterpriseServices assembly in your code. Below is a short example showing how a COM+ transaction flows through to a ServicedComponent derived class; the code inside the TestComPlus() method runs inside the COM+ transaction. When no COM+ transaction is available, a new one is created.
C# VB.NET

// [C#]
[Transaction(TransactionOption.Required)]
public class TestClass : ServicedComponent
{
    [AutoComplete]
    public void TestComPlus()
    {
        ComPlusAdapterContext comPlusContext = new ComPlusAdapterContext();
        IDataAccessAdapter adapter = comPlusContext.Adapter;
        try
        {
            // start transaction.
            adapter.StartTransaction(IsolationLevel.ReadCommitted, "ComPlusTran");
            CustomerEntity customer = new CustomerEntity("CHOPS");
            OrderEntity order = new OrderEntity(10254);
            adapter.FetchEntity(customer);
            adapter.FetchEntity(order);

            // alter the entities
            customer.Fax = "12345678";
            order.Freight = 12;

            // save the two entities again.
            adapter.SaveEntity(customer);
            adapter.SaveEntity(order);

            // done
            adapter.Commit();
        }
        catch
        {
            // abort
            adapter.Rollback();
            throw;
        }
        finally
        {
            comPlusContext.Dispose();
            ((DataAccessAdapter)adapter).Dispose();
        }
    }
}

' [VB.NET]
<Transaction(TransactionOption.Required)> _
Public Class TestClass
    Inherits ServicedComponent

    <AutoComplete()> _
    Public Sub TestComPlus()
        Dim comPlusContext As New ComPlusAdapterContext()
        Dim adapter As IDataAccessAdapter = comPlusContext.Adapter
        Try
            ' start transaction.
            adapter.StartTransaction(IsolationLevel.ReadCommitted, "ComPlusTran")
            Dim customer As New CustomerEntity("CHOPS")
            Dim order As New OrderEntity(10254)
            adapter.FetchEntity(customer)
            adapter.FetchEntity(order)

            ' alter the entities
            customer.Fax = "12345678"
            order.Freight = 12

            ' save the two entities again.
            adapter.SaveEntity(customer)
            adapter.SaveEntity(order)

            ' done
            adapter.Commit()
        Catch
            ' abort
            adapter.Rollback()
            Throw
        Finally
            comPlusContext.Dispose()
            CType(adapter, DataAccessAdapter).Dispose()
        End Try
    End Sub
End Class


Note : COM+ transactions are considered 'advanced material' in .NET applications. Use them with care. You have to give your assemblies a strong name and your application will cause extra overhead on your machine: every serviced component has a context in the COM+ service. Most of the time you can fulfill your transactional requirements using native database transactions with the normal ADO.NET transactions provided by the DataAccessAdapter, as illustrated in the previous section.

.NET 2.0: System.Transactions support
.NET 2.0 introduces the System.Transactions namespace, which contains the TransactionScope class. This class eases the creation of distributed transactions by letting you declare a given scope. All transactions, for example normal ADO.NET transactions, are automatically promoted to distributed transactions, if required, by the TransactionScope they're declared in. This requires support by the database system used, as the database system has to be able to promote a non-distributed transaction to a distributed transaction. When .NET 2.0 shipped, only SqlServer 2005 was able to promote transactions to distributed transactions using System.Transactions' classes. The developer defines such a TransactionScope using the normal .NET constructs, like:

using(TransactionScope scope = new TransactionScope())
{
    // your code here.
}

A DataAccessAdapter object is able to determine whether it's participating in an ambient System.Transactions transaction. If so, it enlists a resource manager with the System.Transactions transaction; the resource manager contains the DataAccessAdapter object. As soon as a DataAccessAdapter is enlisted through a resource manager, its Commit() and Rollback() methods set the resource manager's commit/abort signal, which is requested by the System.Transactions transaction manager. If multiple transactions are executed on a DataAccessAdapter and one is rolled back, the resource manager will report an abort. As soon as the DataAccessAdapter is enlisted in the System.Transactions.Transaction, no ADO.NET transaction is started; starting one is a no-op. Once one rollback is requested, the transaction will always report a rollback to the MSDTC.

Going out of scope
When the System.Transactions transaction is committed or rolled back, the resource manager is notified and will then notify the DataAccessAdapter that it can commit or roll back its transaction. That call will in turn notify the enlisted entities of the outcome of the transaction.

Multiple transactions executed using a single DataAccessAdapter object
To the DataAccessAdapter it will look as if it's still inside the same transaction, so no new transaction is started. This makes sure that an entity which is already participating in the transaction isn't enlisted again, its field values aren't saved again, etc.

Example
Below is an example which shows the usage of a TransactionScope in combination with a DataAccessAdapter object. The code contains Assert statements to illustrate the state / outcome of the various statements.
C# VB.NET

// C#
CustomerEntity newCustomer = new CustomerEntity();
// fill newCustomer's fields.
// ..
AddressEntity newAddress = new AddressEntity();
// fill newAddress' fields.
// ..

// start the scope.
using( TransactionScope ts = new TransactionScope() )
{
    // as we're inside the transaction scope, we can now create a DataAccessAdapter object and
    // start a connection + transaction. The connection + transaction will be enlisted through a
    // resource manager in the TransactionScope ts and will be controlled by that TransactionScope.
    using(DataAccessAdapter adapter = new DataAccessAdapter())
    {
        // save 2 entities non-recursive. This should be done in one
        // transaction, namely the transaction scope we've started.
        newCustomer.VisitingAddress = newAddress;
        newCustomer.BillingAddress = newAddress;
        Assert.IsTrue( adapter.SaveEntity( newCustomer, true) );

        // save went well, alter the entities, which are fetched back, and
        // save again.
        newCustomer.CompanyEmailAddress += " ";
        newAddress.StreetName += " ";
        Assert.IsTrue( adapter.SaveEntity( newCustomer, true ) );
    }
    // do not call Complete, as we want to roll back the transaction and see if the rollback indeed succeeds.
    // as the TransactionScope goes out of scope, the on-going transaction is rolled back.
}
// at this point the transaction of the previous using block is rolled back.

// let the DTC and the system.transactions threads deal with the objects.
// this sleep is only needed because we're going to access the data directly after the rollback.
// In normal code, this sleep isn't necessary.
Thread.Sleep( 1000 );

// test if the data is still there. It shouldn't be, as the transaction has been rolled back.
using( DataAccessAdapter adapter = new DataAccessAdapter() )
{
    CustomerEntity fetchedCustomer = new CustomerEntity( newCustomer.CustomerId );
    Assert.IsFalse( adapter.FetchEntity( fetchedCustomer ) );
    AddressEntity fetchedAddress = new AddressEntity( newAddress.AddressId );
    Assert.IsFalse( adapter.FetchEntity( fetchedAddress ) );
    Assert.AreEqual( 0, newAddress.AddressId );
}

' VB.NET
Dim NewCustomer As New CustomerEntity()
' fill NewCustomer's fields.
' ..
Dim NewAddress As New AddressEntity()
' fill NewAddress' fields.
' ..

' start the scope.
Using ts As New TransactionScope()
    ' as we're inside the transaction scope, we can now create a DataAccessAdapter object and
    ' start a connection + transaction. The connection + transaction will be enlisted through a
    ' resource manager in the TransactionScope ts and will be controlled by that TransactionScope.
    Using adapter As New DataAccessAdapter()
        ' save 2 entities non-recursive. This should be done in one
        ' transaction, namely the transaction scope we've started.
        NewCustomer.VisitingAddress = NewAddress
        NewCustomer.BillingAddress = NewAddress
        Assert.IsTrue( adapter.SaveEntity( NewCustomer, True) )

        ' save went well, alter the entities, which are fetched back, and
        ' save again.
        NewCustomer.CompanyEmailAddress = NewCustomer.CompanyEmailAddress & " "
        NewAddress.StreetName = NewAddress.StreetName & " "
        Assert.IsTrue( adapter.SaveEntity( NewCustomer, True ) )
    End Using
    ' do not call Complete, as we want to roll back the transaction and see if the rollback indeed succeeds.
    ' as the TransactionScope goes out of scope, the on-going transaction is rolled back.
End Using
' at this point the transaction of the previous Using block is rolled back.

' let the DTC and the system.transactions threads deal with the objects.
' this sleep is only needed because we're going to access the data directly after the rollback.
' In normal code, this sleep isn't necessary.
Thread.Sleep( 1000 )

' test if the data is still there. It shouldn't be, as the transaction has been rolled back.
Using adapter As New DataAccessAdapter()
    Dim fetchedCustomer As New CustomerEntity( NewCustomer.CustomerId )
    Assert.IsFalse( adapter.FetchEntity( fetchedCustomer ) )
    Dim fetchedAddress As New AddressEntity( NewAddress.AddressId )
    Assert.IsFalse( adapter.FetchEntity( fetchedAddress ) )
    Assert.AreEqual( 0, NewAddress.AddressId )
End Using


Generated code - Databinding with Windows Forms and ASP.NET 1.x, Adapter
Preface
Databinding is a .NET feature which can drastically increase productivity when it is implemented correctly. It is a technique that works both with single objects and properties as well as with collections of objects. To be able to use databinding with the generated code, both the entity objects and the entity collections are made databinding aware. This section describes the databinding functionality offered by the generated code and also the design time functionality available to you for Windows Forms applications and ASP.NET 1.x applications. As there are big differences between the databinding mechanisms implemented in .NET 1.x and .NET 2.0, some paragraphs apply to a specific .NET version.

Implemented functionality
This section briefly discusses the functionality implemented in the various classes of the generated code you will use with databinding.

.NET 1.x specific: functionality implemented in Entity classes
In the .NET 1.x code, entity classes are equipped with fieldnameChanged events (a Changed event per entity field), which are necessary for databinding. This means that if you bind a property of an entity object to a textbox in your GUI, the textbox is automatically updated when the entity field's value is changed by code other than the textbox. Entity classes furthermore implement ICustomTypeDescriptor, and every entity implements IEditableObject.

.NET 2.0 specific: functionality implemented in Entity classes
In the .NET 2.0 code, entity classes don't have the fieldChanged events; instead they implement the new .NET 2.0 interface INotifyPropertyChanged, which replaces the .NET 1.x events that were necessary for databinding. The interface serves the same purpose: when a property changes, it raises an event, and bound controls can update themselves if they're interested. This means that if you bind a property of an entity object to a textbox in your GUI, the textbox is automatically updated when the entity field's value is changed by code other than the textbox. Every entity implements IEditableObject.

Typed views and typed lists
Typed View and Typed List objects are generated as classes deriving from DataTable, and because the DataTable is already equipped with all the databinding functionality necessary, you can bind a Typed View or Typed List without trouble to a datagrid or to a set of GUI controls.

EntityCollection instances
The EntityCollection classes (generic and non-generic) in LLBLGen Pro implement the IListSource interface. This means that bound controls will request from the interface an object they can bind to; every entity collection returns its DefaultView when this request comes.
This means that when you set a control's DataSource property to an entity collection instance, the collection won't bind directly to the control; it will bind to the control through its DefaultView. It also means that if you create your own EntityView2 instance on a given entity collection, you can bind that EntityView2 to the control instead, to make a subset of the data in the entity collection visible in the control. This is similar to how DataTable and DataView work hand in hand. All actions taken on the data, including creating new entities in a grid for example, are performed on the entity collection. When you're using your own EntityView2 instance, be sure to set the DataChangeAction property of the EntityView2 instance to the correct value. For details, please see: Generated code - using the EntityView2 class. When an entity has 1:m or m:n relations with other entities, it will expose properties which in turn return an EntityCollection (non-generic in .NET 1.x, generic in .NET 2.0). For example, the customer entity


has an 'Orders' property which returns a collection of OrderEntity objects. These collection properties are shown in the DataGrid when AllowNavigation is set to true and you click open the [+] in front of the entity row. You can then click on one of the collection-returning properties and the grid will be filled with the entity rows of that collection. This way you can browse a complete object model, just by walking relations. Adapter doesn't read data 'on the fly', but requires you to read data up-front, to avoid ties with database-interaction code in tiers. Therefore, if you want to traverse data in a datagrid, you need to fetch this data up-front, prior to binding the EntityCollection to the grid, for example by using Prefetch Paths.

EntityView2 implements all useful properties and methods of IBindingList. Among these features are: sorting in grids, making the EntityView2 read-only, and disallowing addition or removal of rows.

Table styles and mapping names
When you want to set up table styles in grids, you have to specify the name of the bound object in the MappingName property of the table style. The following rule is used for this mapping name when you bind an entity collection to a grid: the list name of an entity collection bound to a grid is constructed from the LLBLGenProEntityName of an instance created by the factory set for the collection + "Collection". So if the entity factory is set to CustomerEntityFactory, the name is "CustomerEntityCollection".
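The binding behavior described above can be sketched in a few lines of C#. This is a hedged sketch, not verbatim from the manual: it assumes a Northwind-style generated model (CustomerEntity, CustomerEntityFactory, CustomerFields) and a Windows Forms grid named myGrid; check the reference manual for the exact EntityView2 constructor overloads.

```csharp
// Fetch a collection; because EntityCollection implements IListSource,
// assigning it as DataSource makes the grid bind through its DefaultView.
EntityCollection<CustomerEntity> customers =
    new EntityCollection<CustomerEntity>(new CustomerEntityFactory());
using(DataAccessAdapter adapter = new DataAccessAdapter())
{
    adapter.FetchEntityCollection(customers, null);
}
myGrid.DataSource = customers;    // binds via customers.DefaultView

// Alternatively, bind a custom EntityView2 to show only a subset of the data.
EntityView2<CustomerEntity> germanCustomers = new EntityView2<CustomerEntity>(
    customers, new PredicateExpression(CustomerFields.Country == "Germany"));
myGrid.DataSource = germanCustomers;

// Table style mapping name rule: entity name + "Collection".
DataGridTableStyle style = new DataGridTableStyle();
style.MappingName = "CustomerEntityCollection";
```

Either way, edits made through the grid are performed on the underlying entity collection, as the view is a window on the collection, not a copy.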

Design time support in VS.NET 2002/2003
The generated code supports design time databinding out of the box for the EntityCollection, typed views and typed lists and all derived classes, in .NET 1.x and .NET 2.0. To use design time databinding, open a form (which can be a webform or a Windows Forms form) in design mode and open the toolbox in your IDE (for example Visual Studio.NET). Select the tab for My Controls and select Add/Remove Items.... In the dialog, select Browse... and select the compiled assembly of the generated code. As soon as the IDE has investigated which components are available in the selected assembly, it will check and select all of them (which will be EntityCollection, all typed views and all typed lists). Make sure the selection meets your needs and click OK, which will add all the selected components to the toolbox. You can now select one of the components and drag it onto the form, for example the EntityCollection class. The IDE will place the collection instance in the component area for the form, which is normally at the bottom of the screen. As soon as you drag an EntityCollection class onto your form, a designer will pop up with all IEntityFactory2 implementing classes in the generated code. For example, if the generated code is for Northwind, you can select the CustomerEntityFactory. As soon as you've selected the factory you want, the EntityCollection instance in the component area of the form will be able to produce property descriptors and you will be able to select it as the datasource for a grid on your form. If you want to check which entity factory is currently set on the EntityCollection instance, click it and view the properties window in the IDE, which will show you, among other properties, the EntityFactoryName property.
To set it to another entity factory, right-click the EntityCollection instance and select "Set EntityFactory to use" from the context menu, which will bring up the designer again. To reflect the new columns in the grid, first select 'None' as DataSource and then re-select the EntityCollection instance as the DataSource value. If you have a grid control located on the form, you can select its DataSource property and set it to the component you've dragged onto the form. When you're designing a Windows Forms form, you can also select the DataMember value. For example, if you've selected the CustomerEntityFactory as the factory to use, the EntityCollection will contain CustomerEntity instances, which contain an EntityCollection for Employees, for Orders etc., which will show up in the DataMember list. Using this feature, you can rapidly set up GUIs which are bound to classes in the generated code. Because design time databinding creates an instance of the class dragged onto the form, for example EntityCollection, which will have a valid instance of the factory you've set at design time, for example CustomerEntityFactory, you only have to add the calls to the DataAccessAdapter.FetchEntityCollection method to fill the form with data at runtime.

Design time support in VS.NET 2005
In VS.NET 2005, design time databinding for Windows Forms works as described in the previous section, Design time support in VS.NET 2002/2003; however, some things are made easier for you. VS.NET 2005 will automatically find the generated, non-generic EntityCollection class in your project's generated code, as well as the typed lists and typed views. If you don't get the EntityCollection class in your toolbox, please follow the procedure of the previous section to add them to the toolbox. It's important to use the generated EntityCollection class for design time databinding, as design time databinding in VS.NET 2005 doesn't support generics, which means that you can't add the EntityCollection(Of TEntity) from the


ORMSupportClasses to the toolbox and drag that class onto a form. In VS.NET 2005, you can still directly bind an EntityCollection to a grid control, but it's recommended to use a BindingSource control, which is new in .NET 2.0. To set up a .NET 2.0 DataGridView control, drag not only the EntityCollection onto your form but also a BindingSource control. You then set the DataGridView's DataSource property to the BindingSource control and the BindingSource's DataSource property to the EntityCollection dragged onto the form. The EntityCollection then doesn't have a factory set, so use the smart-tag on the EntityCollection on the form to configure the collection, which will show the EntityCollection's designer. After you've selected the factory to use, you should see the DataGridView set up with the columns of the entity type contained in the bound EntityCollection. ASP.NET got a completely different databinding framework in .NET 2.0 and it requires a different approach. To read more about ASP.NET 2.0 databinding at design time and runtime, please see: Generated code - Databinding with ASP.NET 2.0.
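The runtime side of this BindingSource setup is small. The following is a hedged sketch: it assumes the design time wiring described above is in place and uses hypothetical names (_customers for the EntityCollection instance placed on the form, bindingSource1, dataGridView1).

```csharp
// Design time wiring (done in the designer, shown here for context):
//   dataGridView1.DataSource = bindingSource1;
//   bindingSource1.DataSource = _customers;   // EntityCollection with its factory set
// At runtime, one fetch call fills the grid through the binding chain.
private void Form_Load(object sender, EventArgs e)
{
    using(DataAccessAdapter adapter = new DataAccessAdapter())
    {
        // null filter: fetch all entities of the type produced by the set factory.
        adapter.FetchEntityCollection(_customers, null);
    }
}
```

Because the databinding chain was established at design time, no further code is needed: the DataGridView refreshes itself from the filled collection.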

Databinding and inheritance
When using an inheritance hierarchy, you typically have subtypes with more fields than the supertypes. As LLBLGen Pro supports polymorphic fetches, an entity collection can contain entities of various types (all from the same inheritance hierarchy). If several of these entity types have fields not found in their supertypes or siblings in the hierarchy, what will show up in a grid if such a collection is bound to it? LLBLGen Pro will provide to the bound control properties for all fields which are found in the type set for the collection. For example, say you have a hierarchy like Employee <- Manager <- BoardMember, and you fetch all Manager entities into an EntityCollection with ManagerEntityFactory set as its factory. Due to polymorphic fetching, one or more of the Manager entities in the collection can actually be of type BoardMember. BoardMember entities have, for example, a field called CompanyCarId, which Manager entities don't have. When binding the fetched EntityCollection to a grid, only the fields of the Manager entity will show up in the grid, as that's the type set for the bound EntityCollection, due to the set ManagerEntityFactory. You could argue that 'null' should be shown for fields not in Manager but in subtypes; however, what if two types derive from Manager, each with a different set of new fields not present in the Manager entity? If all fields of all types in the collection were shown, a column would be created for each field and it would be unclear when a user should fill in a field or not. Hence the display of solely the fields of the type set for the collection.


Generated code - Databinding with ASP.NET 2.0, Adapter
Preface
In ASP.NET 2.0, things have changed drastically compared to ASP.NET 1.x. Databinding is now fully two-way and can be set up declaratively, which means you can set up databinding completely in markup, without the need for code in the code-behind file. This section is about databinding to a set of data using a DataSource control. .NET 2.0 ships with a couple of these controls, like the SqlDataSource control and the ObjectDataSource control.

LLBLGen Pro datasource controls
LLBLGen Pro ships with its own DataSource controls: LLBLGenProDataSource for SelfServicing and LLBLGenProDataSource2 for Adapter. These are located in the SD.LLBLGen.Pro.ORMSupportClasses dll for .NET 2.0 and you have to add them to the toolbox first. This can be done manually by right-clicking the toolbox when a webform is open in the editor (HTML or design view) and then selecting 'Choose items...', which allows you to browse to the ORMSupportClasses dll for .NET 2.0. It's key that you add the ORMSupportClasses dll version you're also using in your ASP.NET project, as VS.NET can load only one version of a given assembly, and a mismatch could lead to type clashes in the webform designer. This is a problem with VS.NET 2005 which is almost entirely solved by installing VS.NET 2005 SP1.

Getting started with the LLBLGenProDataSource2 control
Dragging the datasource control of choice onto a form in design mode will show you the smart tag to configure the datasource. It's key to first reference the correct ORMSupportClasses library in your web project. The LLBLGenProDataSource2 control can be used for an EntityCollection, TypedList or TypedView. It's recommended you use the smart-tag to set up the LLBLGenProDataSource2 control, but you can also set it up in the HTML. To get an idea of what to set up, it's easiest to first configure a couple of LLBLGenProDataSource2 controls through the designer using the smart-tag; once you've seen what the HTML looks like, you can reproduce it in HTML from then on. To learn more about the specific properties of the LLBLGenProDataSource2 control, please consult the LLBLGen Pro reference manual for the LLBLGenProDataSource2 control, which is in the ORMSupportClasses assembly.

The LLBLGenProDataSource2 control accepts a type specification which is used for the particular container type: an entity factory for an EntityCollection, the type of a typedlist, or the type of a typedview. The container type, i.e. what kind of object is contained in the LLBLGenProDataSource2 control, is specified using the property DataContainerType, accessible in the designer for the LLBLGenProDataSource2 control and also in the property grid of VS.NET. The contained object itself is exposed through the property belonging to the value of DataContainerType: if DataContainerType is set to EntityCollection, an EntityCollection is inside the datasource control and the EntityCollection property is valid; if DataContainerType is set to TypedList, a TypedList object is contained in the LLBLGenProDataSource2 control, and reading the EntityCollection property, for example in the code behind, will then throw an InvalidOperationException, as it's not defined: you should read the TypedList property instead.
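As an illustration, a minimal declarative setup of an LLBLGenProDataSource2 control with an EntityCollection container could look as follows. This is a sketch; the type names (NW20.FactoryClasses.OrderEntityFactory, NW20.DatabaseSpecific.DataAccessAdapter) are taken from the Northwind-based examples later in this section and have to be replaced with your own generated types:

```html
<%@ Register Assembly="SD.LLBLGen.Pro.ORMSupportClasses"
    Namespace="SD.LLBLGen.Pro.ORMSupportClasses" TagPrefix="llblgenpro" %>

<llblgenpro:LLBLGenProDataSource2 ID="orderDS" runat="server"
    DataContainerType="EntityCollection"
    AdapterTypeName="NW20.DatabaseSpecific.DataAccessAdapter, NW20DBSpecific"
    EntityFactoryTypeName="NW20.FactoryClasses.OrderEntityFactory, NW20" />
```

The DataContainerType value determines which other attributes are relevant: with EntityCollection an entity factory type is specified, with TypedList or TypedView the typedlist/typedview type would be specified instead.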
The GroupBy property is supported in TypedList/TypedView scenarios, while prefetch path objects are supported in EntityCollection scenarios.

Caching of data.
The LLBLGenProDataSource2 control contains a data containing object, be it an entity collection, TypedList or TypedView. This data is cached in-between postbacks, until the data has to be refreshed. If caching is enabled (the default), the place where this data is cached is either the viewstate, the ASP.NET cache or the session, depending on the value of the LLBLGenProDataSource2 control's property CacheLocation. This property is of the enum type DataSourceCacheLocation and can be 'ViewState' (the default), 'ASPNetCache', 'Session' or 'None'; setting it to None disables caching.

If the CacheLocation is set to Session, the data is stored in the session object with a key with the following name: __LLBLGENPRODATASOURCEDATA_controlUniqueID_BindingContainerName. ControlUniqueID is the UniqueID of the DataSource control on the page; BindingContainerName is the name of the container the control is located in. This key is stored in the control state and is always preserved. If the CacheLocation is set to ASPNetCache, the data is stored in the ASP.NET Cache using the following key: __LLBLGENPRODATASOURCEDATA_Guid, where the Guid is a new Guid per LLBLGenProDataSource2 control instance. You can control the ASP.NET Cache duration as well: use the LLBLGenProDataSource2 control property ASPNetCacheDuration, which indicates in minutes how long the cached data should stay in the cache. Recommended is to keep it as long as a session duration, so users won't run into missing data if there's some delay between page render and page postback. The default is 20 minutes.

Disabling caching, by setting CacheLocation to DataSourceCacheLocation.None, has the effect that whenever the LLBLGenProDataSource2 control is forced to fetch data by a call to ExecuteSelect, it always refetches the data from the database. This can lead to different data in the page, so you should be aware of this when using the None setting. As no data is cached, a bound control should use the data in a read-only fashion. When the CacheLocation is set to None, the LLBLGenProDataSource2 control will still cache its state somewhere; it uses the ViewState for that.

Setting the data container manually.
The controls offer properties (EntityCollection, TypedList, TypedView) to set the contained object to an external object, for example an EntityCollection object you've fetched in a method in your business logic tier. This for example offers the ability to bind myCustomer.Orders to grids through the datasource controls by simply setting the property EntityCollection in the code behind file of the webform. You can do the same with TypedLists by setting the TypedList property and with TypedViews by setting the TypedView property. Be sure to first set the DataContainerType property to the right container type, i.e. EntityCollection, TypedList or TypedView.
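In code behind this could look like the following sketch. The names 'orderDS' and 'myCustomer' are hypothetical, and the enum type name DataSourceDataContainerType is an assumption which may differ per LLBLGen Pro version:

```csharp
// Code behind of the webform: bind the orders of an already fetched
// customer to the datasource control. 'orderDS' and 'myCustomer' are
// hypothetical names used only for this sketch.
orderDS.DataContainerType = DataSourceDataContainerType.EntityCollection;
orderDS.EntityCollection = myCustomer.Orders;
```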

Two way databinding.
Two-way databinding can be done with the DataContainerType set to EntityCollection. This means the LLBLGenProDataSource2 control manages not only the binding of the data in the contained EntityCollection to a control, but also the manipulation of that data through the bound control (e.g. a GridView). As TypedList and TypedView classes are read-only by definition, you can't manipulate data inside these classes through the LLBLGenProDataSource2 control. How the LLBLGenProDataSource2 control fetches data (automatically or through code placed inside an event handler), as well as how it saves data (automatically or through event handlers), is discussed in the next section.

LivePersistence and events
The LLBLGenProDataSource2 control's property LivePersistence, which can be true or false, signals the control to either perform the select, insert, update and delete actions directly on the database (true), or to modify the entity collection, add the actions to a UnitOfWork2 object, and use external methods for typedlist, typedview and EntityCollection fetches (false). If LivePersistence is set to false, fetching and saving data isn't performed by the LLBLGenProDataSource2 control; instead an event is raised (see below, Intercepting activity) which passes an event arguments object containing the parameters for the fetch or save, allowing your own code to perform the fetch or save. In any case, the changed event is raised so the bound control(s) can refetch the data from the LLBLGenProDataSource2 control. Setting LivePersistence to false causes 3 events to be raised:

PerformSelect. This event is raised when the LLBLGenProDataSource2 control needs to retrieve data from the database. Use the passed-in PerformSelectEventArgs2 object to perform the fetch action: fetch the data into the appropriate object inside the PerformSelectEventArgs2 object, for example the ContainedCollection, using the parameters available in the PerformSelectEventArgs2 object.

PerformGetDbCount. This event is raised when the LLBLGenProDataSource2 control needs to retrieve the number of items in the complete resultset, i.e. when server side paging is enabled and the total number of items in the resultset is required by the bound control(s). Your handler should fetch the count of the set to fetch by using the passed-in PerformGetDbCountEventArgs2 object, which contains all information necessary for the retrieval of the count value. Set the DbCount property of the PerformGetDbCountEventArgs2 object to the value read from the database. Be sure to pass the filter, prefetch path, sort expression and other objects available to you via the passed-in PerformGetDbCountEventArgs2 object to the DataAccessAdapter.GetDbCount call.

PerformWork. This event is raised when ExecuteInsert/Update/Delete is called on the LLBLGenProDataSource2 control by a bound control. Typically this is done after an entity is edited in, for example, a GridView or FormView control, or after a new entity is added through a bound control. The work is tracked in a UnitOfWork2 object which is available to you in the passed-in PerformWorkEventArgs2 object. When a refetch of the data has taken place, the UnitOfWork2 object contained in the control is cleared. This means that any pending update/insert/delete work has to be completed by that point. Please examine the events and special event argument classes in the LLBLGen Pro Reference manual.

Refetch
To ensure fresh data is retrieved from the database, a flag on the LLBLGenProDataSource2 control called Refetch can be set to true, so the DefaultView will refetch the data, even if for example the page number is the same. This can be necessary if the code-behind code decides the data represented by the LLBLGenProDataSource2 control is invalidated and has to be refetched from the database.

Using the LLBLGenProDataSource2 control.
Binding a LLBLGenProDataSource2 control is simple: just add its ID as DataSourceID in the bound control's HTML, or select the LLBLGenProDataSource2 control from the drop down box of available datasources in the bound control's smart-tag. The designer of the LLBLGenProDataSource2 control allows you to select the DataAccessAdapter, the DataContainerType and the type of object needed for the container (factory, typedlist or typedview type). By default no filter, groupby, prefetch path or sorting is set. The EntityView2 object returned by the DefaultView property of the EntityCollection contained in the LLBLGenProDataSource2 control is used for binding. To set a filter, prefetch path or sort expression, a code behind page is required, as these are compile-time checked. You can however also produce filters at runtime by using the ASP.NET 2.0 parameter binding feature. This allows you to set up a binding between a control (or cookie, form etc.) which produces a value and the datasource control, so the value produced by the other control is used for filtering. See the example below which uses two drop down boxes to create a filter at runtime for fetching data. Paging is supported as well: if you want server-side paging, define the paging parameters on the LLBLGenProDataSource2 control in the VS.NET property grid; if you want paging inside your bound control, define the paging parameters in the bound control, for example the GridView. To be able to work with paging, you of course have to enable paging on the bound control.

Intercepting activity
As described above, the LLBLGenProDataSource2 control has various events you can bind to, to intercept what's going on inside the LLBLGenProDataSource2 control at runtime. For the PerformSelect event, the event arguments contain the container type, the container, and all the parameters to perform the fetch. Not all parameters of the arguments object are valid in every situation; for example, a typedlist fetch requires a field set but doesn't need an entity collection. The event arguments for the PerformGetDbCount event contain an integer called DbCount which should contain the determined count after the event has been handled. The event arguments for the PerformWork event contain a UnitOfWork2 object with the work to be performed. Typically this is a single entity, as every time the bound control updates or deletes an entity, the PerformWork event is raised, due to the postback nature of ASP.NET.

The PerformWork event in an AJAX environment
When you use a grid like the DevExpress ASPxGrid for .NET 2.0, you can have all edit activities performed on the client side, using AJAX communication with the server. When you've set up the grid to be used on the client side, and the LLBLGenProDataSource2 control has LivePersistence set to false, the changes made to the data on the client will be performed in one go when the page gets a post-back. In this scenario, it's more efficient not to bind to PerformWork, but to place a button on the form which simply performs the 'save', e.g. one which says "Save changes". By not binding to the PerformWork event, all ExecuteInsert/Update/Delete actions will take place on the data, but aren't propagated to the database just yet, because LivePersistence is set to false. In the handler of your save button, you then retrieve the UnitOfWork2 object from the LLBLGenProDataSource2 control, which contains all changes made, and you can commit the changes in one transaction.

Filtering on the fly
The LLBLGenProDataSource2 control supports filtering on the fly, based on specified parameters which can retrieve values from other controls, forms, cookies or other objects supported by the SelectParameters feature of ASP.NET 2.0, which is available in a LLBLGenProDataSource2 control via its SelectParameters property. The SelectParameters property follows the same specification as that of the ObjectDataSource, and the parameters are specified declaratively using the <SelectParameters> tag inside a LLBLGenProDataSource2 control declaration. You can also use the VS.NET designer for setting up the SelectParameters: simply click the [...] button next to SelectParameters in the property grid of VS.NET. Be sure that each parameter has the same name as a field in the entity, typed list row or typed view row; you can't use SelectParameters to build a filter on non-entity fields, as it builds a filter for the next fetch. The following example shows this in action.

Trapping invalid input values
As the LLBLGenProDataSource2 control is the receiver of the data filled into a form, grid or other bound control, it can happen that these values are invalid, for example because they don't match the type of the entity field or are too big in size. By default, the LLBLGenProDataSource2 control will ignore these values and won't throw an exception. To make the control throw an exception after all values have been evaluated, you can set the LLBLGenProDataSource2 property ThrowExceptionOnIllegalFieldInput to true, which signals the control to throw an exception if one or more fields received an illegal value which wasn't convertible to the type of the field in an update/insert scenario. If set to true, all illegal values are collected and added to one single ORMValueTypeMismatchException, so your code receives just one exception to handle them all. If set to false, the illegal values are ignored and the fields aren't set to a new value, which was the default behavior in v2.0.

Usage examples
Below are two examples: one using LivePersistence set to true and one using LivePersistence set to false. They're basically the same form. For VB.NET users: the HTML is for C#; change the first line of the given HTML snippets into the following line to use it with VB.NET: <%@ Page Language="VB" AutoEventWireup="true" CodeFile="Default.aspx.vb" Inherits="_Default" %>

Example using LivePersistence
This example contains a form which filters a list of Order entities, provided by the LLBLGenProDataSource2 control orderDS, on the OrderEntity field ShipCountry, provided by a static drop down box, and a drop down box with all customers from Northwind, provided by the LLBLGenProDataSource2 control customerDS. Using parameter binding, the two drop down boxes produce filter information at runtime to produce the proper order list. No code is required; it's completely declarative HTML.

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>
<%@ Register Assembly="SD.LLBLGen.Pro.ORMSupportClasses"
    Namespace="SD.LLBLGen.Pro.ORMSupportClasses" TagPrefix="llblgenpro" %>
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Untitled Page</title>
</head>
<body>
    <form id="form1" runat="server">
        All customers:<br />
        Customers: <asp:DropDownList ID="DropDownList1" runat="server" DataSourceID="customerDS"
                DataTextField="CompanyName" DataValueField="CustomerId" AutoPostBack="True">
        </asp:DropDownList><br />
        ShipCountry: <asp:DropDownList ID="DropDownList2" runat="server" AutoPostBack="True">
            <asp:ListItem>Spain</asp:ListItem>
            <asp:ListItem>Germany</asp:ListItem>
        </asp:DropDownList><br />
        <llblgenpro:llblgenprodatasource2 id="customerDS" runat="server"
                cachelocation="Session" datacontainertype="EntityCollection" enablepaging="True"
                AdapterTypeName="NW20.DatabaseSpecific.DataAccessAdapter, NW20DBSpecific"
                EntityFactoryTypeName="NW20.FactoryClasses.CustomerEntityFactory, NW20">
        </llblgenpro:llblgenprodatasource2>
        <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False"
                DataSourceID="orderDS" DataKeyNames="OrderId" AllowPaging="True" PageSize="5">
            <Columns>
                <asp:CommandField ShowDeleteButton="True" ShowEditButton="True" />
                <asp:BoundField DataField="ShipAddress" HeaderText="ShipAddress" SortExpression="ShipAddress" />
                <asp:BoundField DataField="ShipName" HeaderText="ShipName" SortExpression="ShipName" />
                <asp:BoundField DataField="ShipCountry" HeaderText="ShipCountry" SortExpression="ShipCountry" />
                <asp:BoundField DataField="CustomerId" HeaderText="CustomerId" SortExpression="CustomerId" />
                <asp:BoundField DataField="ShipRegion" HeaderText="ShipRegion" SortExpression="ShipRegion" />
                <asp:BoundField DataField="ShipCity" HeaderText="ShipCity" SortExpression="ShipCity" />
                <asp:BoundField DataField="OrderId" HeaderText="OrderId" SortExpression="OrderId" />
                <asp:BoundField DataField="ShipPostalCode" HeaderText="ShipPostalCode" SortExpression="ShipPostalCode" />
            </Columns>
        </asp:GridView>
        <llblgenpro:llblgenprodatasource2 id="orderDS" runat="server"
                cachelocation="Session" datacontainertype="EntityCollection" enablepaging="True"
                AdapterTypeName="NW20.DatabaseSpecific.DataAccessAdapter, NW20DBSpecific"
                EntityFactoryTypeName="NW20.FactoryClasses.OrderEntityFactory, NW20">
            <SelectParameters>
                <asp:ControlParameter ControlID="DropDownList1" Name="CustomerId"
                    PropertyName="SelectedValue" Type="String" />
                <asp:ControlParameter ControlID="DropDownList2" Name="ShipCountry"
                    PropertyName="SelectedValue" Type="String" />
            </SelectParameters>
        </llblgenpro:llblgenprodatasource2>
    </form>
</body>
</html>

Example using Perform Event handlers
The following example is the same form as in the previous example, with the same functionality; however, it now uses LivePersistence set to false, which means we have to perform the persistence logic ourselves by writing event handlers. The HTML is shown first, which is roughly the same as the previous example's HTML, except it defines bindings to event handlers for PerformGetDbCount, PerformSelect and PerformWork. After that, the code behind in C# and VB.NET is shown.

HTML page


<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>
<%@ Register Assembly="SD.LLBLGen.Pro.ORMSupportClasses"
    Namespace="SD.LLBLGen.Pro.ORMSupportClasses" TagPrefix="llblgenpro" %>
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Untitled Page</title>
</head>
<body>
    <form id="form1" runat="server">
        All customers:<br />
        Customers: <asp:DropDownList ID="DropDownList1" runat="server" DataSourceID="customerDS"
                DataTextField="CompanyName" DataValueField="CustomerId" AutoPostBack="True">
        </asp:DropDownList><br />
        ShipCountry: <asp:DropDownList ID="DropDownList2" runat="server" AutoPostBack="True">
            <asp:ListItem>Spain</asp:ListItem>
            <asp:ListItem>Germany</asp:ListItem>
        </asp:DropDownList><br />
        <llblgenpro:llblgenprodatasource2 id="customerDS" runat="server"
                cachelocation="Session" datacontainertype="EntityCollection" enablepaging="True"
                AdapterTypeName="NW20.DatabaseSpecific.DataAccessAdapter, NW20DBSpecific"
                EntityFactoryTypeName="NW20.FactoryClasses.CustomerEntityFactory, NW20"
                LivePersistence="False" OnPerformSelect="customerDS_PerformSelect">
        </llblgenpro:llblgenprodatasource2>
        <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False"
                DataSourceID="orderDS" DataKeyNames="OrderId" AllowPaging="True" PageSize="5">
            <Columns>
                <asp:CommandField ShowDeleteButton="True" ShowEditButton="True" />
                <asp:BoundField DataField="ShipAddress" HeaderText="ShipAddress" SortExpression="ShipAddress" />
                <asp:BoundField DataField="ShipName" HeaderText="ShipName" SortExpression="ShipName" />
                <asp:BoundField DataField="ShipCountry" HeaderText="ShipCountry" SortExpression="ShipCountry" />
                <asp:BoundField DataField="CustomerId" HeaderText="CustomerId" SortExpression="CustomerId" />
                <asp:BoundField DataField="ShipRegion" HeaderText="ShipRegion" SortExpression="ShipRegion" />
                <asp:BoundField DataField="ShipCity" HeaderText="ShipCity" SortExpression="ShipCity" />
                <asp:BoundField DataField="OrderId" HeaderText="OrderId" SortExpression="OrderId" />
                <asp:BoundField DataField="ShipPostalCode" HeaderText="ShipPostalCode" SortExpression="ShipPostalCode" />
            </Columns>
        </asp:GridView>
        <llblgenpro:llblgenprodatasource2 id="orderDS" runat="server"
                cachelocation="Session" datacontainertype="EntityCollection" enablepaging="True"
                AdapterTypeName="NW20.DatabaseSpecific.DataAccessAdapter, NW20DBSpecific"
                EntityFactoryTypeName="NW20.FactoryClasses.OrderEntityFactory, NW20"
                LivePersistence="False" OnPerformGetDbCount="orderDS_PerformGetDbCount"
                OnPerformSelect="orderDS_PerformSelect" OnPerformWork="orderDS_PerformWork">
            <SelectParameters>
                <asp:ControlParameter ControlID="DropDownList1" Name="CustomerId"
                    PropertyName="SelectedValue" Type="String" />
                <asp:ControlParameter ControlID="DropDownList2" Name="ShipCountry"
                    PropertyName="SelectedValue" Type="String" />
            </SelectParameters>
        </llblgenpro:llblgenprodatasource2>
    </form>
</body>
</html>

Code behind
C# VB.NET

// C#
using System;
using System.Data;
using System.Configuration;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using SD.LLBLGen.Pro.ORMSupportClasses;
using NW20.DatabaseSpecific;

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
    }

    protected void customerDS_PerformSelect(object sender, PerformSelectEventArgs2 e)
    {
        // fetch all customers using the information passed in via the
        // PerformSelectEventArgs2 object. This select doesn't have to perform
        // any paging, as the data is for a combo box.
        using(DataAccessAdapter adapter = new DataAccessAdapter())
        {
            adapter.FetchEntityCollection(e.ContainedCollection, e.Filter,
                    e.MaxNumberOfItemsToReturn, e.Sorter, e.PrefetchPath);
        }
    }

    protected void orderDS_PerformGetDbCount(object sender, PerformGetDbCountEventArgs2 e)
    {
        // get the total number of orders which match the filter passed in via the
        // PerformGetDbCountEventArgs2.
        using(DataAccessAdapter adapter = new DataAccessAdapter())
        {
            e.DbCount = adapter.GetDbCount(e.ContainedCollection, e.Filter);
        }
    }

    protected void orderDS_PerformSelect(object sender, PerformSelectEventArgs2 e)
    {
        // fetch all orders which are in the selected page using the filter passed in
        // via the PerformSelectEventArgs2 object.
        using(DataAccessAdapter adapter = new DataAccessAdapter())
        {
            adapter.FetchEntityCollection(e.ContainedCollection, e.Filter,
                    e.MaxNumberOfItemsToReturn, e.Sorter, e.PrefetchPath,
                    e.PageNumber, e.PageSize);
        }
    }

    protected void orderDS_PerformWork(object sender, PerformWorkEventArgs2 e)
    {
        // Perform the work passed in via the PerformWorkEventArgs2 object.
        // Start a new transaction with the passed in unit of work.
        using(DataAccessAdapter adapter = new DataAccessAdapter())
        {
            // pass the adapter to the Commit routine and tell it to autocommit
            // when the work is done.
            e.Uow.Commit(adapter, true);
        }
    }
}

' VB.NET
Imports SD.LLBLGen.Pro.ORMSupportClasses
Imports NW20.DatabaseSpecific

Partial Class _Default
    Inherits System.Web.UI.Page

    Protected Sub customerDS_PerformSelect(ByVal sender As Object, ByVal e As PerformSelectEventArgs2) _
            Handles customerDS.PerformSelect
        ' fetch all customers using the information passed in via the
        ' PerformSelectEventArgs2 object. This select doesn't have to perform
        ' any paging, as the data is for a combo box.
        Using adapter As New DataAccessAdapter()
            adapter.FetchEntityCollection(e.ContainedCollection, e.Filter, _
                    e.MaxNumberOfItemsToReturn, e.Sorter, e.PrefetchPath)
        End Using
    End Sub

    Protected Sub orderDS_PerformGetDbCount(ByVal sender As Object, ByVal e As PerformGetDbCountEventArgs2) _
            Handles orderDS.PerformGetDbCount
        ' get the total number of orders which match the filter passed in via the
        ' PerformGetDbCountEventArgs2.
        Using adapter As New DataAccessAdapter()
            e.DbCount = adapter.GetDbCount(e.ContainedCollection, e.Filter)
        End Using
    End Sub

    Protected Sub orderDS_PerformSelect(ByVal sender As Object, ByVal e As PerformSelectEventArgs2) _
            Handles orderDS.PerformSelect
        ' fetch all orders which are in the selected page using the filter passed in
        ' via the PerformSelectEventArgs2 object.
        Using adapter As New DataAccessAdapter()
            adapter.FetchEntityCollection(e.ContainedCollection, e.Filter, _
                    e.MaxNumberOfItemsToReturn, e.Sorter, e.PrefetchPath, _
                    e.PageNumber, e.PageSize)
        End Using
    End Sub

    Protected Sub orderDS_PerformWork(ByVal sender As Object, ByVal e As PerformWorkEventArgs2) _
            Handles orderDS.PerformWork
        ' Perform the work passed in via the PerformWorkEventArgs2 object.
        ' Start a new transaction with the passed in unit of work.
        Using adapter As New DataAccessAdapter()
            ' pass the adapter to the Commit routine and tell it to autocommit
            ' when the work is done.
            e.Uow.Commit(adapter, True)
        End Using
    End Sub
End Class

Setting values for insert/update using bound parameters
One new feature of ASP.NET 2.0 is bound parameters, with which you can define parameters that retrieve their values from other controls, cookies, the query string etc. One example is given above, using filtering based on the SelectParameters. The LLBLGenProDataSource2 control also supports InsertParameters and UpdateParameters. You can define these parameters for insert (saving a new entity) and update (saving a changed entity) respectively, the same way as you do with SelectParameters. This way you can for example set the EmployeeId on a new order entity, where you retrieve the EmployeeId from a dropdown control. The value retrieved through a parameter overrules a value set through the bound control. If no value is passed in by the bound control and a value is available through the InsertParameters (when inserting) or UpdateParameters (when updating), the value in the Insert/UpdateParameters collection is chosen.
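Declaratively, this could look as follows. This is a sketch: 'ddlEmployees' is a hypothetical drop down control listing employees, and it assumes InsertParameters follow the same declarative form as the SelectParameters shown in the examples above:

```html
<llblgenpro:llblgenprodatasource2 id="orderDS" runat="server"
    datacontainertype="EntityCollection"
    AdapterTypeName="NW20.DatabaseSpecific.DataAccessAdapter, NW20DBSpecific"
    EntityFactoryTypeName="NW20.FactoryClasses.OrderEntityFactory, NW20">
    <InsertParameters>
        <asp:ControlParameter ControlID="ddlEmployees" Name="EmployeeId"
            PropertyName="SelectedValue" Type="Int32" />
    </InsertParameters>
</llblgenpro:llblgenprodatasource2>
```

The parameter Name matches the entity field name (EmployeeId), as required for bound parameters.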

Converting empty string values ("") to NULL values for inserts/updates
In a web application, form values which are empty are represented as an empty string (""). When an entity is edited through a form, it can happen that some textboxes or other controls bound to fields of the entity are left empty / point to an empty value: "". The LLBLGenProDataSource2 control will convert "" into NULL for all fields whose .NET type isn't the string type. If the field is of the string type, this can cause a problem: what if the empty string is a valid value? To tell the LLBLGenProDataSource2 control that a field should get the empty string as a valid value instead of NULL, you have to pass a List(Of String) object with the names of all fields which should accept "" as a valid value to the property FieldNamesKeepEmptyStringAsValue of the LLBLGenProDataSource2 control. You should do this in the code behind of your webform. Example
C# VB.NET

// C#
// in your Page Load handler routine
if( !Page.IsPostBack )
{
    List<string> fieldsWhichShouldKeepEmptyString = new List<string>();
    fieldsWhichShouldKeepEmptyString.Add( "ShipAddress" );
    _ordersDS.FieldNamesKeepEmptyStringAsValue = fieldsWhichShouldKeepEmptyString;
}


' VB.NET
' in your Page Load handler routine
If Not Page.IsPostBack Then
    Dim fieldsWhichShouldKeepEmptyString As New List(Of String)()
    fieldsWhichShouldKeepEmptyString.Add( "ShipAddress" )
    _ordersDS.FieldNamesKeepEmptyStringAsValue = fieldsWhichShouldKeepEmptyString
End If

This example tells the LLBLGenProDataSource2 control called '_ordersDS' that the field ShipAddress should get the value "" instead of NULL if the form value for that field is "". Normally you don't need to set the property FieldNamesKeepEmptyStringAsValue, if "" is not used for string values and NULL is acceptable instead. It can be that the list of names which keep the empty string as the value is actually the complete set of fields of the entity. In that case, you can set the property AllFieldsKeepEmptyStringAsValue to true, which makes the LLBLGenProDataSource2 control simply not convert empty strings to NULL values for entity fields. Setting this property to true makes the LLBLGenProDataSource2 control ignore FieldNamesKeepEmptyStringAsValue.

The SortingMode property
The LLBLGenProDataSource2 control is capable of applying sorting when data has to be fetched, for example when you've clicked a column header in a bound GridView control. By default, the LLBLGenProDataSource2 control sorts on the server side, by producing a SortExpression which is then used by the fetch logic. Which SortExpression is used depends on the value of the property SorterToUse of the LLBLGenProDataSource2 control and the columns specified by the bound control (e.g. the clicked column header). It's possible to tell the LLBLGenProDataSource2 control to sort on the client side instead. Do this by setting the LLBLGenProDataSource2 property SortingMode. By default it's set to ServerSide. If you set it to ClientSide, sorting is applied after the fetch, by sorting the DefaultView object of the datasource control. Server-side sorting only uses EntityField2 objects, so if the entity has a field which isn't mapped onto a table/view field, it's ignored in server-side sorting actions, because it's not part of the query sent to the database. This is also true for fields mapped onto related fields. In these situations, use client-side sorting. Be aware that if the CacheLocation is set to None, the LLBLGenProDataSource2 control always has to fetch the data from the database again, as there is no cached data to sort in-memory. If you want to avoid the roundtrip to the database, set the CacheLocation to a value other than None.
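In code behind, switching to client-side sorting could look like the following sketch. 'orderDS' is a hypothetical control name, and the enum type name DataSourceSortingMode is an assumption which may differ per LLBLGen Pro version:

```csharp
// Sort in-memory on the DefaultView after the fetch, instead of emitting
// a SortExpression into the database query. Useful when sorting on fields
// mapped onto related fields, which server-side sorting ignores.
orderDS.SortingMode = DataSourceSortingMode.ClientSide;
```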


Generated code - Entity collection and Typed List/Typed View paging, Adapter
Preface
Paging is the way to browse through a list of objects or rows of data one page at a time. This can be handy when you have thousands of rows / objects matching search criteria but you want to enlist only a small number at once. With the paging functionality built into the DataAccessAdapter class, you can tell the generated code which page to retrieve for typed lists, typed views or entity collections, instead of getting all the results at once. This section describes the various options you have.

Note : On SqlServer 7 and 2000, paging is implemented using temp tables. This is done to keep one codebase for both SqlServer 7 and SqlServer 2000, and it gives reasonable performance in all situations (small/large resultsets). Paging using ROWCOUNT tricks is not possible, due to the fact that this kind of paging is pretty limited when it comes to compound primary keys. On SqlServer 2005, paging is done through a CTE query. Please refer to Generated code - Application configuration through .config files and Generated code - Database specific features for how to set the SqlServer DQE into SqlServer 2005 compatibility mode, so it will use a CTE based query instead of a temp table based query.

Paging through an entity collection
Paging through an entity collection is implemented in an overload of DataAccessAdapter.FetchEntityCollection(). This particular overload accepts the page size, which is the number of objects to retrieve in the fetch action, and the page number to retrieve. If you for example pass 10 for the page size and 4 for the page number, you'll get records 31-40; the first record is numbered 1, and the first page is also numbered 1. Paging is disabled if you pass 0 for the page number, or 0 or 1 for the page size.
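As a sketch, fetching page 4 with a page size of 10 could look as follows. The parameter list (collection, filter, maxNumberOfItemsToReturn, sorter, prefetchPath, pageNumber, pageSize) mirrors the call used in the PerformSelect handler example earlier in this documentation; the exact overload may differ per version, and the filter and sorter are left null for brevity:

```csharp
// Fetch page 4, 10 orders per page: records 31-40 of the matching set.
EntityCollection orders = new EntityCollection(new OrderEntityFactory());
using(DataAccessAdapter adapter = new DataAccessAdapter())
{
    // maxNumberOfItemsToReturn is 0 (no limit); no filter, sorter or
    // prefetch path is passed in this sketch.
    adapter.FetchEntityCollection(orders, null, 0, null, null, 4, 10);
}
```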

Get the total number of objects
To effectively use paging, it is key to know how many pages there are, for example when you want to show a list of page numbers the user can choose from, like Google does. You can retrieve the number of objects matching your filter by using the DataAccessAdapter's GetDbCount() method, which uses an aggregate function under the hood. Read more about aggregate functions and expressions in the section Field expressions and aggregates. The example below retrieves the number of order entities of customers from France.

// C#
IRelationPredicateBucket filter = new RelationPredicateBucket();
filter.PredicateExpression.Add(CustomerFields.Country == "France");
filter.Relations.Add(OrderEntity.Relations.CustomerEntityUsingCustomerId);
DataAccessAdapter adapter = new DataAccessAdapter();
int amount = (int)adapter.GetDbCount(new OrderEntityFactory().CreateFields(), filter, null, false);

' VB.NET, .NET 1.x
Dim filter As IRelationPredicateBucket = New RelationPredicateBucket()
filter.PredicateExpression.Add(New FieldCompareValuePredicate( _
    CustomerFields.Country, Nothing, ComparisonOperator.Equal, "France"))
filter.Relations.Add(OrderEntity.Relations.CustomerEntityUsingCustomerId)
Dim adapter As New DataAccessAdapter()
Dim amount As Integer = CInt(adapter.GetDbCount(New OrderEntityFactory().CreateFields(), filter, Nothing, False))

' VB.NET, .NET 2.0
Dim filter As IRelationPredicateBucket = New RelationPredicateBucket()
filter.PredicateExpression.Add(CustomerFields.Country = "France")
filter.Relations.Add(OrderEntity.Relations.CustomerEntityUsingCustomerId)
Dim adapter As New DataAccessAdapter()
Dim amount As Integer = CInt(adapter.GetDbCount(New OrderEntityFactory().CreateFields(), filter, Nothing, False))

The value in amount can now be used to calculate the total number of pages for a given page size: total number of pages = (total number of objects / page size) + n, where n is 0 if the total number of objects modulo the page size is 0, and 1 otherwise. Below is the code to retrieve page 4 with a page size of 10 objects. We re-use the filter object used in the GetDbCount() call:

// C#
EntityCollection orders = new EntityCollection(new OrderEntityFactory());
adapter.FetchEntityCollection(orders, filter, 0, null, 4, 10);

' VB.NET
Dim orders As New EntityCollection(New OrderEntityFactory())
adapter.FetchEntityCollection(orders, filter, 0, Nothing, 4, 10)

After this call, orders contains the 10 objects which form the 4th page of the result set matching the defined filter. No sorting is applied here, but if you specify a sort expression, the sorting is performed prior to the paging logic.
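The page-count formula given above can be sketched in code as follows; this is plain arithmetic, no LLBLGen Pro API is involved, and the example value is hypothetical.

```csharp
// totalAmount would come from the GetDbCount() call shown earlier
int totalAmount = 83;   // hypothetical example value
int pageSize = 10;
// integer division, plus one extra page when there's a remainder
int totalPages = (totalAmount / pageSize) + ((totalAmount % pageSize) == 0 ? 0 : 1);
// for 83 objects and a page size of 10, this yields 9 pages
```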

Paging through a TypedList or TypedView
The paging functionality is also available for typed list and typed view classes, through an overload of the FetchTypedList() method for typed lists and an overload of FetchTypedView() for typed views. For typed lists and typed views, the same rules apply as for collections: page numbers start at 1, the first record is numbered 1, and paging is disabled if you pass in a page number of 0 or a page size of 0 or 1.

Get the total number of rows
Getting the total number of rows for a typed list is a bit different than it is for a collection. Instead of creating a new RelationPredicateBucket object, you use the RelationPredicateBucket returned by the typed list's GetRelationInfo() method, and add your predicates to that. For typed views, it works the same as described above for entity collections. Below is the code to get the number of rows in a typed list with order-customer rows.

// C#
OrderCustomerTypedList orderCustomer = new OrderCustomerTypedList();
IEntityFields2 fields = orderCustomer.GetFieldsInfo();
IRelationPredicateBucket filter = orderCustomer.GetRelationInfo();
filter.PredicateExpression.Add(CustomerFields.Country == "France");
DataAccessAdapter adapter = new DataAccessAdapter();
int amount = (int)adapter.GetDbCount(fields, filter, null, false);

' VB.NET
Dim orderCustomer As New OrderCustomerTypedList()
Dim fields As IEntityFields2 = orderCustomer.GetFieldsInfo()
Dim filter As IRelationPredicateBucket = orderCustomer.GetRelationInfo()
filter.PredicateExpression.Add(New FieldCompareValuePredicate( _
    CustomerFields.Country, Nothing, ComparisonOperator.Equal, "France"))
Dim adapter As New DataAccessAdapter()
Dim amount As Integer = CInt(adapter.GetDbCount(fields, filter, Nothing, False))

Fetching a given page of the typed list or typed view then boils down to using the FetchTypedList() overload which accepts the two paging parameters, or calling the proper FetchTypedView() overload for the required page of a typed view. Be sure to clear the typed list / typed view object before calling the fetch method again to fetch another page.
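As a sketch, fetching two successive pages of a typed list could look like this. The exact parameter order of the FetchTypedList() overload (sort expression, allowDuplicates, group by, page number, page size) is an assumption here; verify it against the LLBLGen Pro reference manual.

```csharp
// C#: fetch page 2, then page 3, of the OrderCustomer typed list, 10 rows per page.
// Parameter order of the paging overload is assumed; check the reference manual.
OrderCustomerTypedList orderCustomer = new OrderCustomerTypedList();
IRelationPredicateBucket filter = orderCustomer.GetRelationInfo();
filter.PredicateExpression.Add(CustomerFields.Country == "France");
DataAccessAdapter adapter = new DataAccessAdapter();
adapter.FetchTypedList(orderCustomer.GetFieldsInfo(), orderCustomer, filter,
    0, null, true, null, 2, 10);
// clear the typed list before fetching another page
orderCustomer.Clear();
adapter.FetchTypedList(orderCustomer.GetFieldsInfo(), orderCustomer, filter,
    0, null, true, null, 3, 10);
```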


Generated code - Unit of work and field data versioning, Adapter
Preface
Sometimes actions on entities span a longer timeframe and/or multiple screens. It's then often impossible to start a database transaction, as user interaction during a transaction should be avoided. Tracking all the changes made and persisting them in one transaction can then be a tedious task. The UnitOfWork2 class solves this: it lets you collect actions on entities or collections of entities and can perform these actions in one go. The UnitOfWork2 class is serializable, which means it can travel across remoting boundaries. Entities and entity collections added to a UnitOfWork2 instance are not aware that they've been added, so if you decide not to continue with a given UnitOfWork2 instance, you can simply let it go out of scope. UnitOfWork2 objects figure out automatically the order in which actions have to be performed: first inserts, then updates, then deletes. This is controllable; see the section below about Specifying the order in which the actions are executed. LLBLGen Pro also supports versioning of entity fields. This can be handy in multi-page wizards which edit portions of an entity. By utilizing the versioning capability you can roll back to a previous set of field values for a particular entity.

UnitOfWork usage: single entities
Actions collected by a UnitOfWork2 object are not performed immediately, but when the UnitOfWork2's Commit() method is called. A UnitOfWork2 object can work with single entities or collections of entities. This paragraph discusses the UnitOfWork2 class with single entities. A UnitOfWork2 object acts as a container for UnitOfWorkElement2 objects, which contain an entity and are defined for a given action: save or delete (update / delete directly are also supported). You can't use the UnitOfWork2 to store select actions. The UnitOfWork2 class works with Add methods for an entity and a given action: AddForSave() or AddForDelete(). You can specify additional parameters for the action: recursive saves and save / delete restriction filters. A delete action for a new entity is ignored. The following example illustrates both methods: first a recursive save is added, after that a delete action. The actions are not executed until Commit() is called. Commit() always expects a DataAccessAdapter object, which is used to run the persistence actions. You don't have to start a transaction yourself: if the passed-in DataAccessAdapter is not controlling a transaction yet, a new one is started. You can commit more than one UnitOfWork2 object in one transaction: simply pass the same DataAccessAdapter object to all Commit() calls, passing false for autoCommit. Commit() can also auto-commit the transaction if all the actions succeed; you then have to use the overload of Commit() which expects a boolean, autoCommit.

// C#
CustomerEntity newCustomer = new CustomerEntity();
// ... fill newCustomer's data
AddressEntity newAddress = new AddressEntity();
// ... fill newAddress's data
newCustomer.VisitingAddress = newAddress;
newCustomer.BillingAddress = newAddress;
UnitOfWork2 uow = new UnitOfWork2();
// add the customer for a recursive save action and specify true
// so the entity is refetched after the save action.
uow.AddForSave(newCustomer, true);
DataAccessAdapter adapter = new DataAccessAdapter();
ProductEntity productToDelete = new ProductEntity(productID);
adapter.FetchEntity(productToDelete);
// add this product for deletion.
uow.AddForDelete(productToDelete);
// commit all actions in one go
uow.Commit(adapter, true);

' VB.NET
Dim newCustomer As New CustomerEntity()
' ... fill newCustomer's data
Dim newAddress As New AddressEntity()
' ... fill newAddress's data
newCustomer.VisitingAddress = newAddress
newCustomer.BillingAddress = newAddress
Dim uow As New UnitOfWork2()
' add the customer for a recursive save action and specify true
' so the entity is refetched after the save action.
uow.AddForSave(newCustomer, True)
Dim productToDelete As New ProductEntity(productID)
Dim adapter As New DataAccessAdapter()
adapter.FetchEntity(productToDelete)
' add this product for deletion.
uow.AddForDelete(productToDelete)
' commit all actions in one go
uow.Commit(adapter, True)

After the Commit() call, the database contains two new entities, the customer and the address, and the product entity is deleted. These actions take place inside a new transaction, started when Commit() is called, which auto-commits at the end of the actions.

UnitOfWork usage: entity collections
Sometimes a complete collection of entities has to be saved or deleted. Instead of adding all entities individually (you could of course opt for that), you can add a collection in one go for a given action: save or delete. The following example loads an Order and its OrderDetail entities, deletes the OrderDetail entities and updates the Order entity. The entities in the collection are examined when Commit() is called. This means that an entity which is in the collection when the collection is added to the UnitOfWork2 object, but is removed from that collection before Commit() is called, is not processed by the UnitOfWork2, as the entity is no longer part of the collection being processed.

// C#
OrderEntity order = new OrderEntity(10254);
// load order detail entities through a prefetch path
IPrefetchPath2 prefetchPath = new PrefetchPath2((int)EntityType.OrderEntity);
prefetchPath.Add(OrderEntity.PrefetchPathOrderDetails);
DataAccessAdapter adapter = new DataAccessAdapter();
adapter.FetchEntity(order, prefetchPath);
UnitOfWork2 uow = new UnitOfWork2();
// alter order
order.EmployeeID = 3;
// add the order for save, no recursion.
uow.AddForSave(order);
uow.AddCollectionForDelete(order.OrderDetails);
// commit all actions in one go
uow.Commit(adapter, true);

' VB.NET
Dim order As New OrderEntity(10254)
' load order detail entities through a prefetch path
Dim prefetchPath As IPrefetchPath2 = New PrefetchPath2(CType(EntityType.OrderEntity, Integer))
prefetchPath.Add(OrderEntity.PrefetchPathOrderDetails)
Dim adapter As New DataAccessAdapter()
adapter.FetchEntity(order, prefetchPath)
Dim uow As New UnitOfWork2()
' alter order
order.EmployeeID = 3
' add the order for save, no recursion.
uow.AddForSave(order)
uow.AddCollectionForDelete(order.OrderDetails)
' commit all actions in one go
uow.Commit(adapter, True)

When Commit() is called, all entities in the added collection are first added as entities for the delete action. After that, all actions are executed: first the save action, then the deletes.

UnitOfWork usage: stored procedures
The UnitOfWork2 class is also able to collect calls to stored procedures, and lets you schedule these calls relative to the work already added to the UnitOfWork2 object, using four slots. The support for stored procedure calls is implemented through delegates. This means that you can also use this feature for your own methods, as long as there is a delegate defined for that method. If you want to receive the actual DataAccessAdapter object, you have to make sure the method accepts a DataAccessAdapter object as its last parameter. Adding a stored procedure call can only be done for Action procedure calls. To add a stored procedure call, you use the AddCallBack method, which accepts a System.Delegate object, a slot enum value which schedules the call, and zero or more parameters. The slots on which you can schedule a stored procedure call are:

PreEntityInsert - Execute the callback before the first entity is inserted.
PreEntityUpdate - Execute the callback after the last entity has been inserted but before the first entity is updated.
PreEntityDelete - Execute the callback after the last entity has been updated but before the first entity is deleted.
PostEntityDelete - Execute the callback after the last entity has been deleted.

LLBLGen Pro generates a delegate definition for each Action procedure call. Using such a generated delegate definition, you could add a call to a stored procedure using the following code. It adds a call to the ClearTestRunData stored procedure. It specifies that the DataAccessAdapter has to be passed into the procedure, so the call will run in the same transaction as the rest of the calls the UnitOfWork object will make. If that's not done, the action procedure will create its own DataAccessAdapter object. The call is scheduled right before the delete calls are made on entities.

// C#
UnitOfWork2 uow = new UnitOfWork2();
uow.AddCallBack(new ActionProcedures.ClearTestRunDataCallBack(ActionProcedures.ClearTestRunData),
    UnitOfWorkCallBackScheduleSlot.PreEntityDelete, true, _testRunID);

' VB.NET
Dim uow As New UnitOfWork2()
uow.AddCallBack(New ActionProcedures.ClearTestRunDataCallBack(AddressOf ActionProcedures.ClearTestRunData), _
    UnitOfWorkCallBackScheduleSlot.PreEntityDelete, True, _testRunID)

UnitOfWork usage: DeleteEntitiesDirectly and UpdateEntitiesDirectly
Besides calls to stored procedures, the UnitOfWork2 object can also accept calls to DeleteEntitiesDirectly and UpdateEntitiesDirectly. You add calls to one of these methods by using one of the overloads of AddDeleteEntitiesDirectlyCall or AddUpdateEntitiesDirectlyCall, specifying the required parameters for these calls as if you'd make them directly. The calls will be executed on the DataAccessAdapter object used by Commit. The DeleteEntitiesDirectly call will be executed after the entity delete actions but before the PostEntityDelete callbacks. The UpdateEntitiesDirectly call will be executed after the last entity has been updated but before the PreEntityDelete callbacks.
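As a sketch, adding such direct calls could look like the code below. The exact overload signatures, and the Northwind-style ProductEntity / ProductFields names, are assumptions here; consult the LLBLGen Pro reference manual for the overloads available in your version.

```csharp
// C#; overload signatures and entity/field names are assumed for illustration.
UnitOfWork2 uow = new UnitOfWork2();

// update directly: mark all products of supplier 3 as discontinued
ProductEntity updateValues = new ProductEntity();
updateValues.Discontinued = true;
IRelationPredicateBucket updateFilter = new RelationPredicateBucket(ProductFields.SupplierId == 3);
uow.AddUpdateEntitiesDirectlyCall(updateValues, updateFilter);

// delete directly: remove all discontinued products
IRelationPredicateBucket deleteFilter = new RelationPredicateBucket(ProductFields.Discontinued == true);
uow.AddDeleteEntitiesDirectlyCall("ProductEntity", deleteFilter);

// both calls are executed on the adapter passed to Commit
uow.Commit(new DataAccessAdapter(), true);
```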

UnitOfWork usage: Monitoring
In some cases you want to know what a UnitOfWork2 object contains and in which order actions will take place, for example the real number of entities which will be, or have been, inserted. LLBLGen Pro's UnitOfWork2 offers various ways to retrieve this data. For example, the UnitOfWork2 class lets you pre-calculate the save queues through the ConstructSaveProcessQueues method. This method constructs the save queues for insert and update, and is also used by Commit(). After this method call, you can call GetInsertQueue and GetUpdateQueue to retrieve the exact queues of entities which will be processed during Commit for a save action. Without these methods it would be hard to figure out the exact amount, because an entity which is added using AddForSave() and specified to be saved recursively can save more entities than just itself, due to the recursive save. These save queues are kept after Commit(), so you can also call GetInsertQueue and GetUpdateQueue after Commit() to see what happened during Commit. For deletes, you can use GetEntityElementsToDelete, because deletes are never recursive. UnitOfWork2 also offers methods to retrieve the elements added for insert and update, as well as the other elements added. You can then use the returned collections to, for example, remove an element from the UnitOfWork2 object. The LLBLGen Pro reference manual shows all methods and properties available to you for information retrieval. LLBLGen Pro uses ConstructSaveProcessQueues before a UnitOfWork2 object is serialized into a remoting stream, to be sure all elements sent over the wire are indeed elements which participate in a save action. You can switch this off by setting unitOfWork2.OptimizedSerialization to false.
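A short sketch of inspecting a UnitOfWork2 object before committing it, based on the methods named above (newCustomer is a hypothetical entity):

```csharp
// C#: pre-calculate the save queues, then inspect them.
UnitOfWork2 uow = new UnitOfWork2();
uow.AddForSave(newCustomer, true);   // recursive: may insert more than one entity
uow.ConstructSaveProcessQueues();
int insertCount = uow.GetInsertQueue().Count;   // exact number of entities to insert
int updateCount = uow.GetUpdateQueue().Count;   // exact number of entities to update
```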

Specifying the order in which the actions are executed
The UnitOfWork2 class by default executes the actions added to it in the following order: callbacks for the PreEntityInsert slot, inserts, callbacks for the PreEntityUpdate slot, updates, UpdateEntitiesDirectly calls, callbacks for the PreEntityDelete slot, deletes, callbacks for the PostEntityDelete slot, DeleteEntitiesDirectly calls. This order can be too limited, for example when you first have to delete an entity before a new insert can take place, because the entity to insert has the same value for a field with a unique constraint. LLBLGen Pro lets you order the entity oriented actions differently, so the UnitOfWork2 class will, for example, first execute the deletes and then the updates. This is done by specifying a list of UnitOfWorkBlockType values for the property unitOfWork2.CommitOrder. By default you don't have to specify any commit order; the UnitOfWork2 class will follow the sequence specified above. However, as soon as you specify a list of UnitOfWorkBlockType values for CommitOrder, it will use that list instead. This means that if you omit a block type, those actions aren't executed at all. Duplicates are filtered out, so specifying a block type twice has no effect; the second one is ignored. Callbacks with the name Pre<action> or Post<action> belong to the block type of <action> and are executed in that block, in the same order as described above; for example, PreEntityUpdate callbacks are executed before the updates when the block type for updates is specified to be executed.
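As a sketch, a commit order which runs the deletes first could be specified like this. The UnitOfWorkBlockType member names used below are assumptions; the reference manual lists the actual enum values.

```csharp
// C#: execute deletes before inserts and updates. uow is a UnitOfWork2 instance.
// Note: block types omitted from the list are not executed at all,
// so list every block your unit of work needs. Member names are assumed.
uow.CommitOrder = new List<UnitOfWorkBlockType>(new UnitOfWorkBlockType[]
    {
        UnitOfWorkBlockType.Deletes,
        UnitOfWorkBlockType.Inserts,
        UnitOfWorkBlockType.Updates
    });
```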

Field data versioning
One innovative feature of LLBLGen Pro is its field data versioning. The fields of an entity, say a CustomerEntity, can be versioned and saved under a name inside the entity object itself. You can then decide to roll back the entity's field values to such a saved version at a later time. The versioned field data is contained inside the entity, travels with the entity across remoting boundaries, and is saved inside the XML produced by WriteXml(). All fields are versioned at once; you can't version a field's values individually.

Page 295

The following example loads an entity, saves its field values and alters them, and then rolls them back when an exception occurs.

// C#
CustomerEntity customer = new CustomerEntity("CHOPS");
customer.SaveFields("BeforeUpdate");
try
{
    // show a form to the user which allows the user to
    // edit the customer entity
    ShowModifyCustomerForm(customer);
}
catch
{
    // something went wrong. Entity can be altered. Roll back
    // fields so further processing won't be affected by these
    // changes which are not completed
    customer.RollbackFields("BeforeUpdate");
    throw;
}

' VB.NET
Dim customer As New CustomerEntity("CHOPS")
customer.SaveFields("BeforeUpdate")
Try
    ' show a form to the user which allows the user to
    ' edit the customer entity
    ShowModifyCustomerForm(customer)
Catch
    ' something went wrong. Entity can be altered. Roll back
    ' fields so further processing won't be affected by these
    ' changes which are not completed
    customer.RollbackFields("BeforeUpdate")
    Throw
End Try


Generated code - .NET remoting support
Preface
LLBLGen Pro fully supports the usage of the generated code and the LLBLGen Pro runtime framework in a distributed scenario with .NET remoting. .NET remoting is often preferred in situations where the client and service are both .NET based and have a tight connection. You have two options: use normal serialization based on the .NET BinaryFormatter logic, or use LLBLGen Pro's own fast serialization logic, simply called FastSerialization. The same serialization option has to be chosen at both sides, so if you decide to use FastSerialization, be sure that both the client and the service use FastSerialization. LLBLGen Pro also supports the XML serialization used in webservice and WCF scenarios. If you're looking for more information on XML serialization, please see the section Generated code - XML Webservices / WCF support. To enable normal serialization, you don't have to do anything; it's enabled by default. Simply send the entities over the wire using remoting and you're set. All classes which are usable in a distributed scenario are serializable over remoting, including the exceptions. We didn't make DataAccessAdapter serializable because it contains a live connection and optionally a live transaction. Making it serializable would promote 'chatty' interfaces which are cumbersome to control: as a DataAccessAdapter instance shouldn't be shared among threads, every client would have to connect to its own DataAccessAdapter, which quickly leads to server overload when a lot of clients connect to the service.

Enabling FastSerialization
To enable FastSerialization you have to set the static property SerializationHelper.Optimization to one of the values of the enum type SerializationOptimization, which is either None (the default: use normal BinaryFormatter serialization) or Fast. After setting this property, LLBLGen Pro will serialize all entities and other objects using FastSerialization. This means that the serialization and deserialization process is much faster and the data block to send over the wire is much smaller. FastSerialization works for entities, typed lists, typed views and UnitOfWork2 objects. For other objects, normal serialization is used, as these tend to be rather small anyway. SerializationHelper has other optimization settings as well. To avoid having the serialization logic emit a GUID for every entity (the ObjectID value of an entity), you can switch this off by setting SerializationHelper.PreserveObjectIDs to false. The default is true. Switching it off has the side effect that an entity receives a new GUID when deserialized. This makes using a Context object with these entities useless, as the entity is no longer detectable as being the same instance. You can also inject your own compressor object. The FastSerialization logic creates a large byte array which is placed under the key "_" in the SerializationContext of the BinaryFormatter. To further compress this byte array, for example with a zip library, you can set the SerializationHelper.Compressor property to an instance of the IByteCompressor interface, which is a simple interface to compress and decompress blocks of bytes. LLBLGen Pro doesn't provide a basic implementation of this interface, but as the interface is very simple, implementing it is straightforward. See the LLBLGen Pro reference manual for more details about this interface and the SerializationHelper class.
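Enabling FastSerialization is then a one-liner, to be executed at startup of both the client and the service, for example:

```csharp
// C#: switch the LLBLGen Pro runtime to FastSerialization for remoting.
SerializationHelper.Optimization = SerializationOptimization.Fast;
// optional: don't serialize the entities' ObjectID GUIDs. Side effect: a Context
// can no longer recognize a deserialized entity as the same instance.
SerializationHelper.PreserveObjectIDs = false;
```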

Serializing / Deserializing custom entity data
If you've added additional members to an entity class and you want to serialize the data in these members into the binary output, as well as deserialize them at the other side, you have to add some code to make that happen. Below, the places are described where to add which code to serialize / deserialize your own data with LLBLGen Pro's remoting logic. As an example, the value of a custom member added to the entity class OrderEntity, _orderTotal (a decimal), is serialized and deserialized.

Normal serialization / deserialization
To serialize the custom member _orderTotal in the OrderEntity class, and to deserialize it properly as well, create a partial class of OrderEntity and add the following code to it. (You can also add the code to the user code region at the bottom of OrderEntity if you prefer.) You don't need to add code for entity fields and entity collections inside entities; these are serialized automatically.

// C#
protected override void OnGetObjectData(SerializationInfo info, StreamingContext context)
{
    info.AddValue("_orderTotal", _orderTotal);
}

' VB.NET
Protected Overrides Sub OnGetObjectData(info As SerializationInfo, context As StreamingContext)
    info.AddValue("_orderTotal", _orderTotal)
End Sub

To deserialize the _orderTotal value again, add the following code to the partial class.

// C#
protected override void OnDeserialized(SerializationInfo info, StreamingContext context)
{
    _orderTotal = info.GetDecimal("_orderTotal");
}

' VB.NET
Protected Overrides Sub OnDeserialized(info As SerializationInfo, context As StreamingContext)
    _orderTotal = info.GetDecimal("_orderTotal")
End Sub

FastSerialization
FastSerialization doesn't use the BinaryFormatter routines of ISerializable; it uses its own graph traversal routines. Therefore, to serialize / deserialize your own custom member variables, you can't use the OnGetObjectData and OnDeserialized methods. Instead you have to use the following two routines. Following the same example as above, these also should be added to a partial class of OrderEntity (or to the user code regions at the bottom of the OrderEntity class).

Note : Make sure that the serialization order and deserialization order are the same.

To serialize the _orderTotal value into the stream, use the following code:


// C#
protected override void SerializeOwnedData(SerializationWriter writer, object context)
{
    base.SerializeOwnedData(writer, context);
    writer.WriteOptimized(_orderTotal);
}

' VB.NET
Protected Overrides Sub SerializeOwnedData(writer As SerializationWriter, context As Object)
    MyBase.SerializeOwnedData(writer, context)
    writer.WriteOptimized(_orderTotal)
End Sub

To deserialize the serialized value, use the following code:

// C#
protected override void DeserializeOwnedData(SerializationReader reader, object context)
{
    base.DeserializeOwnedData(reader, context);
    _orderTotal = reader.ReadOptimizedDecimal();
}

' VB.NET
Protected Overrides Sub DeserializeOwnedData(reader As SerializationReader, context As Object)
    MyBase.DeserializeOwnedData(reader, context)
    _orderTotal = reader.ReadOptimizedDecimal()
End Sub

Serialize / Deserialize RemovedEntitiesTracker with FastSerialization
A RemovedEntitiesTracker collection inside an entity collection isn't serialized into the remoting stream by default. This is because it's recommended to use a UnitOfWork2 instance, and if an entity collection with a RemovedEntitiesTracker is added to the UnitOfWork2 instance, as well as the RemovedEntitiesTracker collection itself, it would be redundant to serialize the same collection twice. To serialize the RemovedEntitiesTracker inside an entity collection without a UnitOfWork2 class, you have to add a few lines of code. First, create a partial class of EntityCollection(Of T) in the generated code. The EntityCollection(Of T) class is located in the HelperClasses namespace in the database generic project. Creating a new partial class of the EntityCollection(Of T) class extends the existing generated EntityCollection(Of T) class. In your partial class of EntityCollection(Of T), add the code as illustrated below. After that, you can serialize an entity collection, with the RemovedEntitiesTracker collection embedded inside itself, over FastSerialization-enabled remoting.

// C#
public partial class EntityCollection<TEntity> : EntityCollectionBase2<TEntity>
    where TEntity : EntityBase2, IEntity2
{
    /// <summary>
    /// Method which stores owned data - i.e. considered private to this collection
    /// and not shared with any external object
    /// </summary>
    /// <param name="writer">SerializationWriter</param>
    /// <param name="context">The serialization flags (previously constructed)</param>
    protected override void SerializeOwnedData(SerializationWriter writer, object context)
    {
        base.SerializeOwnedData(writer, context);
        byte[] trackerData = new byte[0];
        if((this.RemovedEntitiesTracker != null) && (this.RemovedEntitiesTracker.Count > 0))
        {
            // serialize tracker
            FastSerializer serializer = new FastSerializer();
            trackerData = serializer.Serialize(this.RemovedEntitiesTracker).ToArray();
        }
        writer.Write(trackerData);
    }

    /// <summary>
    /// Method which restores owned data - i.e. considered private to this collection
    /// and not shared with any external object
    /// </summary>
    /// <param name="reader">The SerializationReader containing the serialized data</param>
    /// <param name="context">The serialization flags (previously read)</param>
    protected override void DeserializeOwnedData(SerializationReader reader, object context)
    {
        base.DeserializeOwnedData(reader, context);
        byte[] trackerData = reader.ReadByteArray();
        if(trackerData.Length > 0)
        {
            // tracker data read, deserialize it to a real tracker collection
            EntityCollection<TEntity> trackerCollection = new EntityCollection<TEntity>();
            FastDeserializer deserializer = new FastDeserializer();
            deserializer.Deserialize(trackerData, trackerCollection);
            this.RemovedEntitiesTracker = trackerCollection;
        }
    }
}

' VB.NET
Public Partial Class EntityCollection(Of TEntity As {EntityBase2, IEntity2})
    Inherits EntityCollectionBase2(Of TEntity)

    ''' <summary>
    ''' Method which stores owned data - i.e. considered private to this collection
    ''' and not shared with any external object
    ''' </summary>
    ''' <param name="writer">SerializationWriter</param>
    ''' <param name="context">The serialization flags (previously constructed)</param>
    Protected Overrides Sub SerializeOwnedData(writer As SerializationWriter, context As Object)
        MyBase.SerializeOwnedData(writer, context)
        Dim trackerData() As Byte = New Byte() {}
        If (Not Me.RemovedEntitiesTracker Is Nothing) AndAlso (Me.RemovedEntitiesTracker.Count > 0) Then
            ' serialize tracker
            Dim serializer As New FastSerializer()
            trackerData = serializer.Serialize(Me.RemovedEntitiesTracker).ToArray()
        End If
        writer.Write(trackerData)
    End Sub

    ''' <summary>
    ''' Method which restores owned data - i.e. considered private to this collection
    ''' and not shared with any external object
    ''' </summary>
    ''' <param name="reader">The SerializationReader containing the serialized data</param>
    ''' <param name="context">The serialization flags (previously read)</param>
    Protected Overrides Sub DeserializeOwnedData(reader As SerializationReader, context As Object)
        MyBase.DeserializeOwnedData(reader, context)
        Dim trackerData() As Byte = reader.ReadByteArray()
        If trackerData.Length > 0 Then
            ' tracker data read, deserialize it to a real tracker collection
            Dim trackerCollection As New EntityCollection(Of TEntity)()
            Dim deserializer As New FastDeserializer()
            deserializer.Deserialize(trackerData, trackerCollection)
            Me.RemovedEntitiesTracker = trackerCollection
        End If
    End Sub
End Class


Generated code - XML Webservices / WCF support
Preface
One hot buzzword of the last couple of years has to be 'webservices', short for XML Webservices: XML message based services offered through a normal HTTP based website. As webservices are XML based, the .NET framework uses the XmlSerializer to produce XML which reflects the objects returned from a webmethod, and to produce instances of classes from the XML received, for example at the client. This section describes how to use the entity and entity collection classes of the generated code in a webservice scenario. Every entity and entity collection class implements IXmlSerializable, which makes it possible to use these classes with webmethods transparently, without the necessity to first produce XML from them using their WriteXml() methods.

Note : Please pay attention to the caveats section at the bottom, to be fully aware of the consequences and to avoid compilation errors when using the generated code in a webservice scenario.

Note : Generics aren't supported in webservices, nor are polymorphic fetches using single entity instances. This means that the type returned by the webmethod is the type of object you will get on the client. If you return an instance of a derived type of the webmethod's returntype (e.g. you return a ManagerEntity while the method's returntype is EmployeeEntity), you'll get an EmployeeEntity object at the client, not a ManagerEntity object. Polymorphic fetches using EntityCollection objects are supported.

Instead of returning entities and entity collections, you could consider using Data Transfer Objects (DTO's) or small message objects, which are filled using LLBLGen Pro's projection functionality (see: Generated code - Fetching DataReaders and projections, SelfServicing or Generated code - Fetching DataReaders and projections, Adapter) and then used with the XmlSerializer. Although the XmlSerializer is limited, it can be more efficient if your objects are simple and don't contain interface typed members nor cyclic references.

.NET 3.0 introduces WCF, or Windows Communication Foundation: a new framework which replaces the .NET 1.x/2.0 way of doing webservices and remoting. LLBLGen Pro entities are fully usable with WCF, similar to webservices. As entity classes are generated for you, using a DataContract isn't supported, but that also wouldn't be that helpful, given the fact that an entity can change while a data contract isn't changeable. At the end of this section, a special WCF paragraph describes how to get started with LLBLGen Pro and WCF, illustrating a simple service in code.

For webservices and WCF, which both use the IXmlSerializable implementations of the entity and entity collection classes, LLBLGen Pro uses its own Compact25 format, which is very lightweight and very fast to consume and produce as it contains almost no overhead. See for a description of the Compact25 format: Generated code - Xml support.
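To make the DTO alternative concrete, here is a hedged sketch for an Adapter setup. The CustomerDTO class, its properties, and the entity field names (CustomerId, CompanyName) are invented for this illustration; in practice LLBLGen Pro's projection functionality can perform the entity-to-DTO mapping for you.

```csharp
// Hypothetical DTO: a flat, XmlSerializer-friendly class without
// interface typed members or cyclic references.
public class CustomerDTO
{
    private string _customerID;
    private string _companyName;

    public string CustomerID
    {
        get { return _customerID; }
        set { _customerID = value; }
    }

    public string CompanyName
    {
        get { return _companyName; }
        set { _companyName = value; }
    }
}

// Webmethod sketch: fetch the entity, then copy the fields over by hand.
[WebMethod]
public CustomerDTO GetCustomerDto(string customerID)
{
    CustomerEntity customer = new CustomerEntity(customerID);
    using(DataAccessAdapter adapter = new DataAccessAdapter())
    {
        adapter.FetchEntity(customer);
    }
    CustomerDTO toReturn = new CustomerDTO();
    toReturn.CustomerID = customer.CustomerId;   // assumed field name
    toReturn.CompanyName = customer.CompanyName; // assumed field name
    return toReturn;
}
```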

Example usage
The example discussed here is pretty simple. It offers a service with three methods: GetCustomer, SaveCustomer and GetCustomers. The client consumes the service to retrieve a customer, lets the user edit it in a winforms application, and saves the changed data back into the database using the webservice; it also uses the service to display all customers available.


The service project has references to both generated projects: the database generic and the database specific project. The client only has a reference to the database generic project, as it uses the service for all database specific activity, namely the persistence logic which works with the actual data. Because both client and service reference the database generic project, they can both use the same types for the entities, in this case the CustomerEntity.

The service
Below is the service code, simplified. As you can see, the code works directly with entity objects and entity collection objects.
C# VB.NET

// C#
[WebService(Namespace="http://www.llblgen.com/examples")]
public class CustomerService : System.Web.Services.WebService
{
    [WebMethod]
    public CustomerEntity GetCustomer(string customerID)
    {
        CustomerEntity toReturn = new CustomerEntity(customerID);
        using(DataAccessAdapter adapter = new DataAccessAdapter())
        {
            adapter.FetchEntity(toReturn);
            return toReturn;
        }
    }

    [WebMethod]
    public EntityCollection GetCustomers()
    {
        EntityCollection customers = new EntityCollection(new CustomerEntityFactory());
        using(DataAccessAdapter adapter = new DataAccessAdapter())
        {
            adapter.FetchEntityCollection(customers, null);
            return customers;
        }
    }

    [WebMethod]
    public bool SaveCustomer(CustomerEntity toSave)
    {
        using(DataAccessAdapter adapter = new DataAccessAdapter())
        {
            return adapter.SaveEntity(toSave);
        }
    }
}

' VB.NET
<WebService(Namespace:="http://www.llblgen.com/examples")> _
Public Class CustomerService
    Inherits System.Web.Services.WebService

    <WebMethod> _
    Public Function GetCustomer(customerID As String) As CustomerEntity
        Dim toReturn As New CustomerEntity(customerID)
        Dim adapter As New DataAccessAdapter()
        Try
            adapter.FetchEntity(toReturn)
            Return toReturn
        Finally
            adapter.Dispose()
        End Try
    End Function

    <WebMethod> _
    Public Function GetCustomers() As EntityCollection
        Dim customers As New EntityCollection(New CustomerEntityFactory())
        Dim adapter As New DataAccessAdapter()
        Try
            adapter.FetchEntityCollection(customers, Nothing)
            Return customers
        Finally
            adapter.Dispose()
        End Try
    End Function

    <WebMethod> _
    Public Function SaveCustomer(toSave As CustomerEntity) As Boolean
        Dim adapter As New DataAccessAdapter()
        Try
            Return adapter.SaveEntity(toSave)
        Finally
            adapter.Dispose()
        End Try
    End Function
End Class

This code is the code-behind of the .asmx file which forms the service entry point.

The client
VS.NET uses wsdl.exe to generate so-called proxy classes. These classes are required to communicate with the webservice and in fact represent the webservice on the client. When using VS.NET 2002/2003, the wsdl.exe executable produces correct code with one exception: every class which is exposed through a webmethod (either as return value or as method parameter) and which implements IXmlSerializable (which is the case with entity and entity collection classes) is considered to be a DataSet. This of course isn't correct. See the section .NET 1.x specific: Caveats using wsdl.exe .NET 1.x/VS.NET 2002/3 and the entity classes below for more details on this. If you're targeting .NET 2.0 specifically (so no .NET 3.0/WCF) and you're using VS.NET 2005, you can generate extra classes in a separate project which will help VS.NET and wsdl.exe produce proper proxy classes with normal entity types instead of DataSets. See the section .NET 2.0 specific: Schema importers below for more details about how to use this separate project to get proper proxy classes.

We start by adding a webreference to the webservice. This triggers VS.NET to put wsdl.exe to work to generate the proxy classes. We then have to manually adjust these classes to correct the types. After we've fixed this issue, we can write some client code which utilizes the webservice we just created. Below are the three methods which illustrate a way to use the webservice. Only the relevant code is posted here, but it's enough to illustrate the point and to get you started with consuming the entities with your own webservices. No error recovery is implemented, just the minimal code to get the service used.
C# VB.NET

// C#
// Method which is called after the GetCustomer button is clicked.
private void _getCustomerButton_Click(object sender, System.EventArgs e)
{
    // Zeus.CustomerService is the generated proxy for the service.
    Zeus.CustomerService service = new Zeus.CustomerService();
    // grab the textbox contents and pass it to the service to retrieve the customer entity
    CustomerEntity c = (CustomerEntity)service.GetCustomer(_customerIDTextBox.Text);
    // as we're displaying the customer in a grid, we have to wrap the customer object in
    // an entity collection, as grids only bind to collections.
    EntityCollection col = new EntityCollection();
    col.Add(c);
    _mainGrid.DataSource = col;
}

// Method which is called after the GetAll button is clicked.
private void _getAllButton_Click(object sender, System.EventArgs e)
{
    // Zeus.CustomerService is the generated proxy for the service.
    Zeus.CustomerService service = new Zeus.CustomerService();
    // simply pull all customers from the service into an entity collection.
    EntityCollection customers = (EntityCollection)service.GetCustomers();
    // bind the collection to the grid.
    _mainGrid.DataSource = customers;
}

// Method which is called after the SaveCustomer button is clicked.
// This button works together with the GetCustomer button and assumes
// a customer is loaded in the grid using GetCustomer.
private void _saveCustomer_Click(object sender, System.EventArgs e)
{
    // Zeus.CustomerService is the generated proxy for the service.
    Zeus.CustomerService service = new Zeus.CustomerService();
    // Get the collection currently bound to the grid, which is the wrapper
    // around the single customer object received earlier.
    EntityCollection customers = (EntityCollection)_mainGrid.DataSource;
    // The customer object in the collection is sent to the service.
    // Inside the object, changed information is stored so the persistence logic
    // at the service will be able to save the data.
    bool saveResult = service.SaveCustomer((CustomerEntity)customers[0]);
    MessageBox.Show("Save result = " + saveResult);
}

' VB.NET
' Method which is called after the GetCustomer button is clicked.
Private Sub _getCustomerButton_Click(sender As Object, e As System.EventArgs)
    ' Zeus.CustomerService is the generated proxy for the service.
    Dim service As New Zeus.CustomerService()
    ' grab the textbox contents and pass it to the service to retrieve the customer entity
    Dim c As CustomerEntity = CType(service.GetCustomer(_customerIDTextBox.Text), CustomerEntity)
    ' as we're displaying the customer in a grid, we have to wrap the customer object in
    ' an entity collection, as grids only bind to collections.
    Dim col As New EntityCollection()
    col.Add(c)
    _mainGrid.DataSource = col
End Sub

' Method which is called after the GetAll button is clicked.
Private Sub _getAllButton_Click(sender As Object, e As System.EventArgs)
    ' Zeus.CustomerService is the generated proxy for the service.
    Dim service As New Zeus.CustomerService()
    ' simply pull all customers from the service into an entity collection.
    Dim customers As EntityCollection = CType(service.GetCustomers(), EntityCollection)
    ' bind the collection to the grid.
    _mainGrid.DataSource = customers
End Sub

' Method which is called after the SaveCustomer button is clicked.
' This button works together with the GetCustomer button and assumes
' a customer is loaded in the grid using GetCustomer.
Private Sub _saveCustomer_Click(sender As Object, e As System.EventArgs)
    ' Zeus.CustomerService is the generated proxy for the service.
    Dim service As New Zeus.CustomerService()
    ' Get the collection currently bound to the grid, which is the wrapper
    ' around the single customer object received earlier.
    Dim customers As EntityCollection = CType(_mainGrid.DataSource, EntityCollection)
    ' The customer object in the collection is sent to the service.
    ' Inside the object, changed information is stored so the persistence logic
    ' at the service will be able to save the data.
    Dim saveResult As Boolean = service.SaveCustomer(CType(customers(0), CustomerEntity))
    MessageBox.Show("Save result = " & saveResult)
End Sub

Custom Member serialization/deserialization
If you add your own member variables to entity classes, including their properties, you probably also want these values to be serialized and deserialized into the XML stream. Normally, a custom member exposed as a read/write property is serialized as a string, using the ToString() method of the value of the custom property. In a lot of cases this isn't sufficient and you want to perform your own custom XML serialization/deserialization on the value of this custom property, for example if this custom property represents a complex object. To signal that the LLBLGen Pro runtime framework has to call custom XML serialization code for a given property, the property has to be annotated with a CustomXmlSerializationAttribute attribute. When a property with that attribute is seen, LLBLGen Pro will call the entity's PerformCustomXmlSerialization method to produce valid XML for the custom property. Likewise, when deserializing an XML stream into entities and the LLBLGen Pro runtime framework runs into a property annotated with a CustomXmlSerializationAttribute, it will call the entity's PerformCustomXmlDeserialization method to deserialize the XML for the property into a valid object. You should override these methods in a partial class of the entity which contains the custom properties. Custom property serialization/deserialization is a feature of the Compact25 XML format, which is used by Adapter in Webservices/WCF scenarios.
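To make this concrete, below is a rough sketch of annotating such a custom property. Everything beyond the attribute name is an assumption: the property and its ContactPreferences type are invented for illustration. The PerformCustomXmlSerialization / PerformCustomXmlDeserialization overrides are omitted here because their exact signatures should be taken from the LLBLGen Pro reference manual, not guessed.

```csharp
// Sketch only: a partial entity class with a custom property that is
// routed through custom XML (de)serialization. The property name and
// the ContactPreferences type are assumptions for this example.
public partial class CustomerEntity
{
    private ContactPreferences _preferences = new ContactPreferences();

    // The attribute tells the runtime to call the custom serialization
    // methods for this property instead of serializing it via ToString().
    [CustomXmlSerialization]
    public ContactPreferences Preferences
    {
        get { return _preferences; }
        set { _preferences = value; }
    }
}
```

In the same partial class you would then override PerformCustomXmlSerialization and PerformCustomXmlDeserialization to produce and consume the XML for this property.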

.NET 1.x specific: Caveats using wsdl.exe .NET 1.x/VS.NET 2002/3 and the entity classes
It's already been discussed briefly in the previous section: there is a caveat when using the generated entity classes and entity collection classes in a webservice scenario when using .NET 1.x and VS.NET 2002/2003. The wsdl.exe tool, which is used by VS.NET, generates proxy classes to work with a webservice. If this service returns or uses entity or entity collection classes, the wsdl.exe tool will think they're DataSets and will generate code which uses DataSet instead of the typed entity classes (which don't derive from DataSet at all). This will give non-compilable code on the client. In .NET 2.0 this is different, and LLBLGen Pro v2.0 has support code built in to avoid this for VS.NET 2005/.NET 2.0. See the next section .NET 2.0 specific: Schema importers for details about the situation in .NET 2.0 / VS.NET 2005.

With the default generated proxy classes in .NET 1.x / VS.NET 2002/2003, the developer writing the client code has to do the following to fix this issue. Be aware that this code is refreshed every time the webreference is refreshed in the client VS.NET project. If this is unacceptable for you, either return/use DataSets for the communication between client and server, use WriteXml/ReadXml to produce XML for the communication between client and server, or use remoting.

In VS.NET, open the Web References node in the solution explorer and click the button to show all files. Open the Reference.map node and open the file Reference.cs/vb.


Reference.cs/vb is the generated stub/proxy class for the service. First add at the top of the file the using/Imports statements which reference the EntityClasses and HelperClasses namespaces. Next, alter every System.Data.DataSet reference and change it to the right type returned from the webservice / used in the methods of the webservice. The code is now ready to be compiled and used. Be aware that with a huge service, it's best to keep the generated proxy code in a safe place, so when the service is refreshed you don't have to alter all the types again.
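The manual fix boils down to an edit like the following sketch in Reference.cs. The root namespace 'Northwind' and the method shown are illustrative; your generated namespaces and webmethods will differ.

```csharp
// At the top of Reference.cs, add the namespaces of the generated code
// (root namespace 'Northwind' is an assumption for this example):
using Northwind.EntityClasses;
using Northwind.HelperClasses;

// Before: wsdl.exe incorrectly typed the webmethod as returning a DataSet:
//   public System.Data.DataSet GetCustomer(string customerID) { ... }
// After: change the return type to the actual entity type:
//   public CustomerEntity GetCustomer(string customerID) { ... }
```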

.NET 2.0 specific: Schema importers
In .NET 2.0, Microsoft has updated the wsdl.exe tool and the WSDL format slightly, so it's now possible to direct the wsdl.exe tool to generate proxy classes which use the actual types returned by the webmethods instead of DataSets, when the return types of the methods implement IXmlSerializable. The mechanism isn't easy however, because Microsoft implemented it in a very complex way. This means that you have to jump through various hoops to get the proxy classes generated the way you want them to be.

Note : If you're using .NET 3.0 or higher and WCF, you don't need to use Schema importers, as you can use a ServiceContract. Please see the WCF specific section below for more details.

To begin, first enable, in the adapter preset for .NET 2.0 you're using, the tasks under the SD.Tasks.Adapter.Webservices.SchemaImporter group header. Three tasks are in that task group, all disabled by default. Furthermore, enable the task SD.Tasks.Adapter.Webservices.WebserviceHelperClassGenerator as well. To re-use the preset later on, you should save the preset under a different name. When you generate code with these tasks enabled, you'll get an extra VS.NET project generated, SchemaImporter.vb/csproj. It will contain a single class, SchemaImporter. Furthermore, in the dbgeneric project, in the HelperClasses namespace, a class called WebServiceHelper.vb/cs is generated. The SchemaImporter project works on the client, in conjunction with the XmlSchema data produced by the service, which is enhanced with extra type info.

Getting it all up and running
Follow the next steps to get proper proxy classes with the generated code. The SchemaImporter project has to be signed with a strong name and has to be placed in the GAC. The machine.config file of the developer's machine has to be adjusted as well. This means that the system running the service doesn't have to be altered at all.

1. Go to the SchemaImporter project.
2. In the menu, choose 'Project' and then 'SchemaImporter Properties'.
3. Go to the 'Application' tab and press the 'Assembly Information' button.
4. Fill in at least the assembly version; make it version 1.0.0.0.
5. Go to the 'Signing' tab:
   a. Activate the 'Sign the assembly' checkbox.
   b. In the combobox, choose 'New...'.
   c. In the popup window, enter the name for your strong name key and a password, if required.
   d. Press OK. You should now see the key included in your project.
6. Compile your SchemaImporter project. Before you compile, you might want to change the name in your project properties, under the 'Application' tab, in the 'Assembly Name' textbox, so it'll be recognizable.
7. We'll now update the machine.config file on the developer's machine. To do that properly, we need some information about the key used for signing the SchemaImporter assembly.
8. Open the Visual Studio 2005 command prompt and go to the project folder of the SchemaImporter project.
9. Now enter the following commands:
   a. sn -p schemaimporter.snk snpub.snk
   b. sn -t snpub.snk
   The public key token is shown, something like ab123456c78de9f0.
10. The XML snippet below will be copied into the machine.config file in a moment. Replace the public key token in the XML below with the public key token just shown.

<system.xml.serialization>
  <schemaImporterExtensions>
    <add name="SchemaImporter"
         type="yournamespace.SchemaImporter, assemblyname, Version=1.0.0.0, Culture=neutral, PublicKeyToken=ab123456c78de9f0" />
  </schemaImporterExtensions>
</system.xml.serialization>

11. Now go to the following folder: C:\Windows\Microsoft.NET\Framework\v2.0.50727\CONFIG. This is on the client developer's machine. You don't need to alter any data on the webservice machine.
12. Open the machine.config file with notepad.
    a. Find the following tag: </configSections>
    b. The XML shown in step 10 should be copied after this tag, as it closes the configuration settings, which specifies our System.Xml.Serialization section. Make sure your public key token is in it.
    c. Alter the type attribute to your own namespace and assemblyname. In your SchemaImporter project you can find the namespace and assemblyname. The assemblyname is in your project properties under the 'Application' tab. The namespace is in the SchemaImporter.cs class file. It should be something like: type="[rootnamespace].SchemaImporter.EntityClassesSchemaImporter, SchemaImporter". Example: if you've set your rootnamespace to MyCompany.CoolApp, then it should be: type="MyCompany.CoolApp.SchemaImporter.EntityClassesSchemaImporter, SchemaImporter"
    d. Save the file.
13. We're now going to register our assembly in the Global Assembly Cache (GAC) manually. We can also do this automated through the build properties and a dos command. If you want to do that, skip the rest of this list and go to the next list of steps.
14. First, open up the control panel, administrative tools and then choose Microsoft .NET Framework 2.0 Configuration.
15. Under 'My Computer' go to the Assembly Cache and choose Add an Assembly to the Assembly Cache.
16. Go to your project folder, to the /bin/debug/ folder, and add your SchemaImporter assembly (the .dll file).

Now everything should work. We can also register the SchemaImporter automatically every time we build a new version of the solution. To do that, follow the next steps:
1. Go to the project properties of your SchemaImporter project and select the 'Build Events' tab.
2. Enter the following line in the post-build events box:
   "C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\gacutil" -i "$(TargetPath)"

After this long list of steps, you should be able to add a webreference to your webservice which returns and consumes entities from the code generated with the preset which has the WebServiceHelper generator task and the SchemaImporter generator tasks enabled. Once the proxy is generated, you can check whether it worked by going to the web reference and choosing to see all files. Take a look at the reference map and the file under it, Reference.cs: that's the generated proxy. (Special thanks to Dennis v/d Stelt and Ryan Hurd).

.NET 3.0+ specific: Windows Communication Foundation (WCF) support
In .NET 3.0 and up, you don't need schema importers anymore, as the preferred technique to use is Windows Communication Foundation (WCF). WCF takes care of a lot of plumbing and configuration, so it's easier to write service oriented software. To be able to send entities from service to client and back using WCF, you have to define a ServiceContract. This ServiceContract defines the types involved in the service. It's recommended to define an interface onto which the ServiceContract is defined. Both client and service then know which types are involved in the service, and stub classes are no longer created nor necessary. If you want to have a fixed DataContract instead, you shouldn't use entity classes but instead send Data Transfer Objects (DTO's), which are more or less dumb buckets of data, back and forth. The reason is that


a DataContract can't change, while an entity might change over time, which would then violate the DataContract. LLBLGen Pro's powerful projection framework can help you with projecting fetched data onto DTO classes to send them over the wire.

Below is a small example of a simple WCF service and client. Its main purpose is to illustrate what to do to get LLBLGen Pro generated code working with WCF. You should check the MSDN library for information about WCF, configuration of WCF services and other WCF documentation to get a WCF service up and running in your environment.

Interface with ServiceContract
Below is the service interface definition with the ServiceContract. The client code will use this interface to refer to the service and the service will use this interface to implement a common interface for clients to connect to.
C# VB.NET

// C#
[ServiceContract]
[ServiceKnownType(typeof(CustomerEntity))]
[ServiceKnownType(typeof(EntityCollection))]
public interface IWCFExample
{
    [OperationContract]
    IEntity2 GetCustomer(string customerID);

    [OperationContract]
    IEntityCollection2 GetCustomers();
}

' VB.NET
<ServiceContract(), _
 ServiceKnownType(GetType(CustomerEntity)), _
 ServiceKnownType(GetType(EntityCollection))> _
Public Interface IWCFExample
    <OperationContract()> _
    Function GetCustomer(customerID As String) As IEntity2
    <OperationContract()> _
    Function GetCustomers() As IEntityCollection2
End Interface

Server implementation
Below is the implementation of the IWCFExample interface to be used as a WCF service.
C# VB.NET

// C#
// class to implement the service logic
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class WCFExampleService : IWCFExample
{
    public IEntity2 GetCustomer(string customerID)
    {
        CustomerEntity toReturn = new CustomerEntity(customerID);
        using(DataAccessAdapter adapter = new DataAccessAdapter())
        {
            adapter.FetchEntity(toReturn);
        }
        return toReturn;
    }

    public IEntityCollection2 GetCustomers()
    {
        EntityCollection toReturn = new EntityCollection(new CustomerEntityFactory());
        using(DataAccessAdapter adapter = new DataAccessAdapter())
        {
            adapter.FetchEntityCollection(toReturn, null);
        }
        return toReturn;
    }
}

// class to actually run the service:
public class WCFExampleServerHost
{
    public WCFExampleServerHost()
    {
        WCFExampleService server = new WCFExampleService();
        ServiceHost host = new ServiceHost(server);
        host.Open();
    }
}

' VB.NET
' class to implement the service logic
<ServiceBehavior(InstanceContextMode := InstanceContextMode.Single)> _
Public Class WCFExampleService
    Implements IWCFExample

    Public Function GetCustomer(customerID As String) As IEntity2 Implements IWCFExample.GetCustomer
        Dim toReturn As New CustomerEntity(customerID)
        Using adapter As New DataAccessAdapter()
            adapter.FetchEntity(toReturn)
        End Using
        Return toReturn
    End Function

    Public Function GetCustomers() As IEntityCollection2 Implements IWCFExample.GetCustomers
        Dim toReturn As New EntityCollection(New CustomerEntityFactory())
        Using adapter As New DataAccessAdapter()
            adapter.FetchEntityCollection(toReturn, Nothing)
        End Using
        Return toReturn
    End Function
End Class

' class to actually run the service:
Public Class WCFExampleServerHost
    Public Sub New()
        Dim server As New WCFExampleService()
        Dim host As New ServiceHost(server)
        host.Open()
    End Sub
End Class


Client usage of service
Below is the code snippet to consume the service defined above. It illustrates the usage of the service.
C# VB.NET

// C#
ChannelFactory<IWCFExample> channelFactory =
    new ChannelFactory<IWCFExample>("WCFExampleServer");
IWCFExample server = channelFactory.CreateChannel();
// Fetch an entity
IEntity2 c = server.GetCustomer("CHOPS");
// Fetch a collection
IEntityCollection2 customers = server.GetCustomers();

' VB.NET
Dim channelFactory As New ChannelFactory(Of IWCFExample)("WCFExampleServer")
Dim server As IWCFExample = channelFactory.CreateChannel()
' Fetch an entity
Dim c As IEntity2 = server.GetCustomer("CHOPS")
' Fetch a collection
Dim customers As IEntityCollection2 = server.GetCustomers()

Configuration of the service
Below is the serviceModel element of the service config file. The settings below are illustrative and your own production service will likely use different values for various WCF settings. Please consult the WCF documentation in the MSDN library for details on the elements used.

<system.serviceModel>
  <bindings>
    <netTcpBinding>
      <binding name="RemoteConfig"
               closeTimeout="infinite"
               openTimeout="infinite"
               sendTimeout="infinite"
               receiveTimeout="infinite"
               maxBufferSize="65536000"
               maxReceivedMessageSize="65536000" />
    </netTcpBinding>
  </bindings>
  <services>
    <service name="Service.WCFExampleServer">
      <endpoint address=""
                binding="netTcpBinding"
                name="WCFExampleServer"
                bindingConfiguration="RemoteConfig"
                contract="Interfaces.IWCFExample" />
      <host>
        <baseAddresses>
          <add baseAddress="net.tcp://localhost:6543/WCFExampleServer" />
        </baseAddresses>
      </host>
    </service>
  </services>
</system.serviceModel>


Configuration of the client
Below is the serviceModel element of the client config file. The settings below are illustrative and your own production client will likely use different values for various WCF settings. Please consult the WCF documentation in the MSDN library for details on the elements used.

<system.serviceModel>
  <bindings>
    <netTcpBinding>
      <binding name="RemoteConfig"
               closeTimeout="infinite"
               openTimeout="infinite"
               sendTimeout="infinite"
               receiveTimeout="infinite"
               maxBufferSize="65536000"
               maxReceivedMessageSize="65536000" />
    </netTcpBinding>
  </bindings>
  <client>
    <endpoint address="net.tcp://localhost:6543/WCFExampleServer"
              name="WCFServer"
              binding="netTcpBinding"
              bindingConfiguration="RemoteConfig"
              contract="Interfaces.IWCFExample" />
  </client>
</system.serviceModel>

Using the generated code, SelfServicing
Preface
This section describes various aspects of the generated code specific to the SelfServicing template group. You can generate code using the SelfServicing templates by choosing the 'SelfServicing' template group in the generator configuration dialog; see: Designer - Generating code. SelfServicing will generate one Visual Studio.NET project in a single directory. The code is not compatible with Adapter code; you can however use both in the same application, and you can share validation classes.

Generated code - DbUtils functionality
Preface
Every set of SelfServicing generated classes has one class which controls a couple of global settings for the application: DbUtils.cs/vb, located in the HelperClasses namespace. DbUtils is also the class used by the DAO classes to produce database specific objects, like ADO.NET connection and command objects, and to perform low level stored procedure execution. Below are the options it offers as static (shared) members.

Connection strings
The DbUtils class lets you set the global connection string to use for every connection to the database. This setting overrides the connection string read from the appSettings section in the .config file. Once the setting is set, every connection to the database uses the set connection string. You set the connection string to use at runtime using the following code:
C# VB.NET

// C#
DbUtils.ActualConnectionString = "Datasource=myserver;....";

' VB.NET
DbUtils.ActualConnectionString = "Datasource=myserver;...."

If you want to make the application use the connection string defined in the config file again, simply set the ActualConnectionString property to string.Empty.

Command timeouts
If you want to set the ADO.NET command timeout to a value other than the default of 30 seconds, use the DbUtils.CommandTimeOut property to set it to a different value. This will change the timeout immediately for all calls to the database.

Note: Firebird doesn't support command timeouts.

ArithAbort flag (SqlServer only)
If an entity is saved into a table which is part of an indexed view, SqlServer requires that SET ARITHABORT ON is specified prior to the actual save action. You can tell LLBLGen Pro to set that option by calling the global method DbUtils.SetArithAbortFlag(bool). If the ArithAbort flag is set to true, a SET ARITHABORT OFF statement will be executed after each SQL statement. Setting this flag affects all INSERT statements following the call to SetArithAbortFlag(), until you call that method again.
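As a short sketch, enabling the flag before saving into a table that participates in an indexed view, and switching it off again afterwards:

```csharp
// Tell the runtime to emit SET ARITHABORT ON prior to each save,
// which SqlServer requires for tables that are part of an indexed view.
DbUtils.SetArithAbortFlag(true);

// ... perform the saves into the indexed-view table here ...

// Switch the behavior off again for subsequent statements.
DbUtils.SetArithAbortFlag(false);
```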

DQE Compatibility mode(SqlServer only)
With the arrival of SqlServer 2005 and its new features, it was necessary to make the SqlServer DQE configurable, so it could generate SQL which is optimal for the database type used. To set the compatibility mode of the SqlServer DQE in code, you can use the DbUtils method SetSqlServerCompatibilityLevel, as shown in the following example, which sets the compatibility mode to SqlServer 2005:


C# VB.NET

// C#
DbUtils.SetSqlServerCompatibilityLevel( SqlServerCompatibilityLevel.SqlServer2005 );

' VB.NET
DbUtils.SetSqlServerCompatibilityLevel( SqlServerCompatibilityLevel.SqlServer2005 )

The different compatibility modes are:
SqlServerCompatibilityLevel.SqlServer7 (or the value 0)
SqlServerCompatibilityLevel.SqlServer2000 (or the value 1)
SqlServerCompatibilityLevel.SqlServer2005 (or the value 2)

The default is SqlServer2000. The values 0, 1 or 2 have to be used when you're using the .config file parameter. See for more details about that parameter: Generated code - Application configuration through .config files. Setting the compatibility level controls the sequence retrieval logic to use by default (@@IDENTITY on SqlServer 7 or SCOPE_IDENTITY() on 2000/2005), the ability to use NEWSEQUENTIALID() (SqlServer 2005) and the SQL produced for a paging query: a temptable approach is used on SqlServer 7 or 2000, and a CTE approach is used on SqlServer 2005.

Generated code - Using the entity classes, SelfServicing
Preface
When you generate code and you opt for the TwoClasses preset in combination with the SelfServicing template group, you'll notice that there are actually two classes (hence the two class scenario) per entity, which are used in a 'base class' - 'derived class' way. (When you use the General preset, there is just one class, EntityName Entity.cs/vb, which contains the same functionality as the EntityName EntityBase.cs/vb class generated in the TwoClasses situation). This section describes the TwoClasses situation and how to use the base class and derived class for every entity in your code, though the topics addressed can easily be applied to the generated code produced when using the General preset as well.

All entity classes derive from a central, generated base class called CommonEntityBase. This class is the base class for all generated entity classes and it derives from the class EntityBase, which is located in the ORMSupportClasses assembly. The CommonEntityBase class can be used to add code (via a partial class or using the user code regions) to all generated entities without having to generate / add that code to all entity classes separately. The section below applies equally to entities in an inheritance hierarchy and entities not in an inheritance hierarchy, unless stated otherwise.

Two classes
The base class, generally named EntityNameEntityBase.cs/vb, for example OrderEntityBase.cs, contains all the logic and the implementations of the various methods defined in the EntityBase class in the ORMSupportClasses namespace. This base class derives from the central generated base class CommonEntityBase. The other class, EntityNameEntity.cs/vb, for example OrderEntity.cs, is the class you work with in your code; in other words, the class you use as the type to instantiate entity objects. When you're using .NET 1.x, this is also the class where you add custom code to the entity classes, for example extra properties. When you're using .NET 2.0, it's recommended to use partial classes instead; you can create a partial class for either one of the generated entity classes. If you're using .NET 2.0 and starting a new project, it's recommended to use the General preset and partial classes, as that will likely give you less generated code.

Note : The TwoClasses presets will overwrite the derived entity classes if these files exist, which is a change from v1.0.2005.1 and earlier versions, which didn't overwrite the derived entity classes. The contained user code regions will of course be preserved. See Adding your own code to the generated classes for details on modifying the generated code.

Instantiating an existing entity
As described in the concepts , an entity is a semantic name for a group of existing data. An entity has a definition, the entity definition, which is formulated in a database table or view and, once you've added that entity definition to your project, also in code, namely in the EntityNameEntityBase.cs/vb class. To load an entity's data from the persistent storage, we create an instance of the generated class related to this entity's definition and order it to load the data of the particular entity. As an example, we'll load the entity identified with the customerID "CHOPS" into an object.

Using the primary key value
One way to instantiate the entity in an object is by passing all primary key values to the constructor of the entity class to use:


C# VB.NET

// [C#]
CustomerEntity customer = new CustomerEntity("CHOPS");

' [VB.NET]
Dim customer As New CustomerEntity("CHOPS")

This will load the entity with the primary key value of "CHOPS" into the object named customer , directly from the persistent storage. LLBLGen Pro doesn't use an in-memory cache of objects, to prevent concurrency issues among multiple threads in multiple appdomains (which is the case when you run a client on two or more machines, when you have a web-farm or when your business logic is stored on multiple machines), though it does provide a way to implement uniquing or caching through the Context object. See Using the context, SelfServicing for more details about the context. Another, less compact way is to use an empty entity object and to fetch it by calling its fetch method:
C# VB.NET

// [C#]
CustomerEntity customer = new CustomerEntity();
customer.FetchUsingPK("CHOPS");

' [VB.NET]
Dim customer As New CustomerEntity()
customer.FetchUsingPK("CHOPS")

Using a related entity
Another way to instantiate this same entity is via a related entity. Let's load the order with ID 10254, which is an order of customer "CHOPS", and via that order, get an instance of the entity "CHOPS".
C# VB.NET

// [C#]
OrderEntity order = new OrderEntity(10254);
CustomerEntity customer = order.Customer;

' [VB.NET]
Dim order As New OrderEntity(10254)
Dim customer As CustomerEntity = order.Customer

LLBLGen Pro automatically creates properties to retrieve related entities or collections of related entities, using an instance of a given entity. It doesn't matter what type the relation between the two entities has: 1:n, m:1, 1:1 or m:n. In this case, Customer and Order have a 1:n relationship (one customer can have multiple orders) from the Customer's point of view and an m:1 relationship (one order has just one customer) from the Order's point of view. A single entity property, which order.Customer is, uses the method GetSingleFieldMappedOnRelation() to actually retrieve the entity. You can use that method too, instead of the property:
C# VB.NET

// [C#]
OrderEntity order = new OrderEntity(10254);
CustomerEntity customer = order.GetSingleCustomer();

' [VB.NET]
Dim order As New OrderEntity(10254)
Dim customer As CustomerEntity = order.GetSingleCustomer()

If Customer is in an inheritance hierarchy, the fetch is polymorphic. This means that if the order entity, in this case order 10254, has a reference to a derived type of Customer, for example GoldCustomer, the entity returned will be of type GoldCustomer. See also Polymorphic fetches below.

Load on demand/Lazy loading
Once loaded, the entity is not loaded again if you access the property again. This is called load on demand or lazy loading : the load action of the related entity (in our example 'customer') is performed when you ask for it, not when the referencing entity (in our example 'order') is loaded. You can set a flag which makes the code load the related entity each time you access the property: AlwaysFetchFieldMappedOnRelation . In our example of Order and Customer, OrderEntity has a property called AlwaysFetchCustomer and CustomerEntity has a property called AlwaysFetchOrders . The default for these properties is 'false'. Setting these properties to true will ensure that the related entity is reloaded from the database each time you access the property. This can be handy if you want to stay up to date with the related entity's state in the database. It can degrade performance, however, so use the property with care. Another way to force loading of a related entity or collection is by specifying true for the forceFetch parameter in the GetSingleFieldMappedOnRelation call, or, when the property contains a collection, the GetMultiFieldMappedOnRelation call. Forcing a fetch differs from AlwaysFetchFieldMappedOnRelation in that a forced fetch will clear the collection first, while AlwaysFetchFieldMappedOnRelation does not. A forced fetch will thus remove new entities added to the collection from that collection, as these are not yet stored in the database.
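The two reload mechanisms described above can be sketched like this. Northwind entity names are assumed; the property and method names follow the AlwaysFetchFieldMappedOnRelation / GetMultiFieldMappedOnRelation patterns explained above, so your generated names may differ:

```csharp
OrderEntity order = new OrderEntity(10254);

// 1) Reload the related customer on every access of the property:
order.AlwaysFetchCustomer = true;
CustomerEntity customer = order.Customer;   // fetched from the database again

// 2) One-off forced fetch of a collection. Note: this clears the collection
// first, so new, unsaved entities added to it are removed as well.
CustomerEntity chops = new CustomerEntity("CHOPS");
chops.GetMultiOrders(true);   // forceFetch = true
```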
If you use a prefetch path to read a Customer and its related Order entities from the database, the Orders will not be re-loaded if you access the property after the fetch. A prefetch path which loads related entities makes sure that lazy loading will not undo the work the prefetch path already performed.

When the related entity is not found in the database, the generated code returns a new, empty entity by default. For example: Customer has an optional relation (weak relation) with Address via Customer.VisitingAddressID - Address.AddressID, and myCustomer.VisitingAddress is accessed while myCustomer doesn't have a related visiting address entity; a new AddressEntity instance is then returned. You can test whether the returned entity is new or fetched by comparing its Fields.State property with EntityState.New (for a new entity) or EntityState.Fetched (for a fetched entity). This can be cumbersome in some situations. You can tell the entity to return null (C#) or Nothing (VB.NET) instead of a new entity if the entity is not found, by setting the property FieldMappedOnRelationReturnNewIfNotFound to false. In our example of the Customer and its optional VisitingAddress field, mapped on the relation Customer.VisitingAddressID - Address.AddressID, Customer will have a property VisitingAddressReturnNewIfNotFound . Setting this property to false will make myCustomer.VisitingAddress return null (C#) or Nothing (VB.NET) if the related Address entity is not found for myCustomer.

By default these flags are set to true, to avoid breaking existing code already in production. You can change this default in the LLBLGen Pro designer: in the preferences and project properties , set the preference (which is inherited by a new project) or project property (if you're working on an existing project, be sure you set the property on the project as well) LazyLoadingWithoutResultReturnsNew to false and regenerate your code. The code generator will then generate 'false' / 'False' for all FieldMappedOnRelationReturnNewIfNotFound flags in all entities, which ensures that if an entity doesn't exist, null / Nothing is returned instead of a new entity.
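A sketch of the ReturnNewIfNotFound behavior described above. The VisitingAddress names are the example names used in this section; your generated property names depend on your own relations:

```csharp
CustomerEntity myCustomer = new CustomerEntity("CHOPS");

// Return null instead of a new, empty AddressEntity when nothing is found:
myCustomer.VisitingAddressReturnNewIfNotFound = false;
AddressEntity address = myCustomer.VisitingAddress;
if(address == null)
{
    // no related visiting address exists in the database for this customer
}
```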


Note : Be aware that some code can trigger lazy loading when you didn't intend to. Consider Customer and Order, which have a 1:n relation (and Order and Customer an m:1 relation). The following code triggers the fetch of all orders for the myCustomer instance, while that wasn't the intention:

myCustomer.Orders.Add(myOrder);

while this code:

myOrder.Customer = myCustomer;

does the same thing, as LLBLGen Pro keeps both sides of a relation in sync, but doesn't trigger lazy loading.

Using a unique constraint's value
We just used the primary key value for the entity "CHOPS", which is, to be exact, the unique identifying attribute for the entity "CHOPS". The customer entity also has a unique constraint defined on its field 'CompanyName', which is therefore also a unique identifying attribute for the same entity. We can use that field to load the same entity. Because a unique constraint with the same types of fields as the primary key would result in the same constructor signature, which would not compile, fetching the entity using a unique constraint is done in two steps: first create an empty entity object, then fetch the entity data using a method call. Because an entity can have more than one unique constraint, the fields of the unique constraint are part of the method name. In this case, the entity Customer has a unique constraint with one field, CompanyName, which is utilized by the method FetchUsingUCCompanyName(companyName):
C# VB.NET

// [C#]
CustomerEntity customer = new CustomerEntity();
customer.FetchUsingUCCompanyName("Chop-suey Chinese");

' [VB.NET]
Dim customer As New CustomerEntity()
customer.FetchUsingUCCompanyName("Chop-suey Chinese")

Using a prefetch path
An easy way to instantiate an entity is by using a Prefetch Path, which reads related entities together with the entity or entities to fetch. See Prefetch Paths for more information about Prefetch Paths and how to use them.

Using a collection class
Another way to instantiate an entity is by creating a collection class with one or more entities of the same entity definition (entity type, like Customer) using the EntityCollection classes or via a related entity which has a 1:n relation with the entity to instantiate. For an example, please see Tutorials and Examples: How Do I? - Read all entities into a collection .

Using a Context object
If you want to get a reference to an entity object already in memory, you can use a Context object , if that object was added to that particular Context object. The example below retrieves a reference to the customer object with PK "CHOPS", if that entity was previously loaded into an entity object which was added to that Context object. If the entity object isn't in the Context object, a new entity object is returned. An example usage is shown below.
C# VB.NET

// C#
CustomerEntity customer = (CustomerEntity)myContext.Get(new CustomerEntityFactory(), "CHOPS");
if(customer.IsNew)
{
	// not found in context, fetch from database
	customer.Refetch();
}

' VB.NET
Dim customer As CustomerEntity = CType(myContext.Get(New CustomerEntityFactory(), "CHOPS"), CustomerEntity)
If customer.IsNew Then
	' not found in context, fetch from database
	customer.Refetch()
End If

Creating a new / modifying an existing entity
This section discusses how to create a new entity and save it to the database and how to modify an existing entity and persist the changes.

Creating an entity
Loading an entity is nice, but it has to be created before it can be loaded. To create a new entity, simply instantiate an empty entity object, in this case a new Customer:
C# VB.NET

// [C#]
CustomerEntity customer = new CustomerEntity();

' [VB.NET]
Dim customer As New CustomerEntity()

To create the entity in the persistent storage, two things have to be done: 1) the entity's data (which is new) has to be stored in the new entity object and 2) the entity data has to be persisted / saved in the persistent storage. To add the customer Foo Inc. to the database, do the following:
C# VB.NET

// [C#]
customer.CustomerID = "FOO";
customer.Address = "1, Bar drive";
customer.City = "Silicon Valley";
customer.CompanyName = "Foo Inc.";
customer.ContactName = "John Coder";
customer.ContactTitle = "Owner";
customer.Country = "USA";
customer.Fax = "(604)555-1233";
customer.Phone = "(604)555-1234";
customer.PostalCode = "90211";
// save it
customer.Save();

' [VB.NET]
customer.CustomerID = "FOO"
customer.Address = "1, Bar drive"
customer.City = "Silicon Valley"
customer.CompanyName = "Foo Inc."
customer.ContactName = "John Coder"
customer.ContactTitle = "Owner"
customer.Country = "USA"
customer.Fax = "(604)555-1233"
customer.Phone = "(604)555-1234"
customer.PostalCode = "90211"
' save it
customer.Save()

Region isn't filled in, which is fine: it can be NULL and will therefore also end up as NULL in the database. This will save the data directly to the persistent storage (database) and the entity is immediately available for other threads / appdomains targeting the same database. The entity object customer itself is marked 'out of sync', which means that the entity's data is refetched from the database when you try to read one of the object's field values. This way, you can immediately read back values which are set inside the database, for example default values for columns. The code is aware of sequences / identity columns and will automatically set the value for an identity / sequence column right after the Save() method returns, so it is available in the next statement after a call to Save(). If you're using a database which uses sequences, like Oracle or Firebird, be sure to mark the field which should be used with a sequence as identity in the entity editor . Because the entity saved is new (customer.IsNew is true), Save() will use an INSERT query. After a successful save, the IsNew flag is set to false and the State property of the Fields object of the saved entity is set to EntityState.Fetched (if the entity is also refetched) or EntityState.OutOfSync .

Note : Fields which get their values from a trigger, from newid() or from a default constraint calling a user defined function are not considered sequenced fields and their values will not be read back, so you'll have to supply a value for these fields prior to saving the entity. This isn't true for fields of type unique_identifier on SqlServer 2005 when the DQE is set in SqlServer 2005 compatibility mode and the field has a default value of NEWSEQUENTIALID() in the database. See Generated code - Database specific features .

Note : If the entity is in a hierarchy of type TargetPerEntityHierarchy (see Concepts - Entity inheritance and relational models ) you don't have to set the discriminator value for the entity type, this is done for you automatically: just create a new instance of the entity type you want to use, and the discriminator value is automatically set and will be saved with the entity.

Modifying an entity
Modifying an entity's data is just as simple and can be done in multiple ways:
1. Load an existing entity into memory, alter one or more fields (not sequenced fields) and call Save().
2. Create a new entity, set the primary key values, set IsNew to false, set one or more other fields' values and call Save(). This will not alter the PK fields.
3. Use one of the UpdateMulti*() methods defined in the collection class of the entity.

Option 1 is likely the most used one, since an entity might already be in memory. As with all the suggested options, the Save() method will see that the entity isn't new, and will therefore use an UPDATE query to alter the entity's data in the persistent storage. An UPDATE query will only update the changed fields in an entity that is saved, which results in efficient queries. If no fields are changed, no update is performed. A field which is set to the same value (according to Equals()) is not marked as 'changed' (i.e. the field's IsChanged flag is not set).


If you've loaded an entity from the database into memory and you've changed one or more of its primary key fields, these fields will be updated in the database as well (except sequenced/identity columns). Changing PK fields is not recommended and changed PK fields are not propagated to related entities fetched in memory. You also can't save changed PK fields in recursive saves. An example for code using the first method:
C# VB.NET

// [C#]
CustomerEntity customer = new CustomerEntity("CHOPS");
customer.Phone = "(605)555-4321";
customer.Save();

' [VB.NET]
Dim customer As New CustomerEntity("CHOPS")
customer.Phone = "(605)555-4321"
customer.Save()

This will first load the Customer entity "CHOPS" into memory, alter one field, Phone, and then save that single field back into the persistent storage. Because the loading of "CHOPS" already set the primary key, we can just alter a field and call Save(). The UPDATE query will solely set the table field related to the entity field "Phone". Reading an entity into memory first can be somewhat inefficient, since all we need to do is update an entity row in the database. Option 2 is more efficient in that it just starts an update, without first reading the data from the database. The following code performs the same update as the previous example illustrating option 1. Even though the PK field is changed, it is not updated, because it was not previously fetched from the database.
C# VB.NET

// [C#]
CustomerEntity customer = new CustomerEntity();
customer.CustomerID = "CHOPS";
customer.IsNew = false;
customer.Phone = "(605)555-4321";
customer.Save();

' [VB.NET]
Dim customer As New CustomerEntity()
customer.CustomerID = "CHOPS"
customer.IsNew = False
customer.Phone = "(605)555-4321"
customer.Save()

We have to set the primary key field, so the update will only affect a single entity, the "CHOPS" entity. Next, we have to mark the new, empty entity object as not being new, so Save() will call the Update method instead of the Insert method. This is done by setting the flag IsNew to false. Then follows the altering of a field, in this case "Phone", and the call to Save(). This will not load the entity back into memory, but because Save() is called, it will be marked out of sync, and the next time you access a property of this entity object, it will be refetched from the persistent storage. Doing updates this way can be very efficient and you can use very complex update constructs when you apply an Expression to the field(s) to update. See Field expressions and aggregates for more information about Expression objects for fields.


Notes :
This will also work for fast deletes.
If you want to set an identity primary key column, you'll notice you can't, because it is marked as read-only. Use the method entityObject.Fields[fieldindex or fieldname].ForcedCurrentValueWrite(value) . See the reference manual for details about this method (EntityField.ForcedCurrentValueWrite).
Setting a field to the same value it already has will not set the field to a value (and will not mark the field as 'changed') unless the entity is new.
Each entity which is saved is validated prior to the save action. This validation can be a no-op, if no validation code has been added by the developer, either through code added to the entity, or through a validator class . See Validation per field or per entity for more information about LLBLGen Pro's validation functionality.
(SqlServer specific) If the entity is saved into a table which is part of an indexed view, SqlServer requires that SET ARITHABORT ON is specified prior to the actual save action. You can tell LLBLGen Pro to set that option by calling the global method DbUtils.SetArithAbortFlag(bool) . After each SQL statement a SET ARITHABORT OFF statement will be executed if the ArithAbort flag is set to true. Setting this flag affects the whole application.

For option 3 , see Using the collection classes .
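The ForcedCurrentValueWrite workaround from the notes above could look like the following sketch. The Order field names are assumptions for illustration; the idea is writing an identity PK value into an otherwise read-only field so a direct update can be performed:

```csharp
OrderEntity order = new OrderEntity();
// OrderID is an identity column and thus read-only; write it through the field object:
order.Fields[(int)OrderFieldIndex.OrderID].ForcedCurrentValueWrite(10254);
order.IsNew = false;          // make Save() issue an UPDATE, not an INSERT
order.Freight = 12.5M;
order.Save();                 // updates only the changed field of order 10254
```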

Setting the EntityState to Fetched automatically after a save
By design, an entity which was successfully saved to the database gets the EntityState OutOfSync . The LLBLGen Pro runtime framework will automatically refetch an entity which is marked OutOfSync right before an entity field's property is read. This is done to make sure that default constraints, calculated fields and other elements which could have been changed after the save action inside the database (for example because a database trigger ran after the save action) are reflected in the entity after the save action.

If you know this won't happen in your application, you can gain performance by specifying that LLBLGen Pro should mark a successfully saved entity as Fetched instead of OutOfSync. In this situation, LLBLGen Pro won't perform a fetch action to obtain the new entity values from the database. To use this feature, set the static/Shared property EntityBase.MarkSavedEntitiesAsFetched to true (default is false). This will be used for all entities in your application, so if you have some entities which have to be fetched after the update (for example because they have a timestamp field), you should keep the default, false. You can also set this value using the config file of your application by adding the following line to the appSettings section of your application's config file:

<add key="markSavedEntitiesAsFetched" value="true"/>

You don't need to refetch an entity if it has a sequenced primary key (identity or sequence), as these values are read back directly with the insert statement.
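A sketch of the performance option just described; as stated above, only use it when no triggers or default constraints modify the row after the save. The Customer names are Northwind assumptions:

```csharp
// Application-wide switch: mark saved entities as Fetched instead of OutOfSync.
EntityBase.MarkSavedEntitiesAsFetched = true;

CustomerEntity customer = new CustomerEntity("CHOPS");
customer.Phone = "(605)555-4321";
customer.Save();
// Reading a property now will not trigger a refetch from the database.
string phone = customer.Phone;
```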

Saving entities recursively
All entity objects and entity collection objects in SelfServicing support recursive saves. This means that if you have an entity object, say a CustomerEntity, and its collection objects (for example customer.Orders) contain changed entities, or the entity references changed entities, these entities will be saved as well when the particular entity is saved. In SelfServicing, this logic is not enabled by default, to be backwards compatible. You have to call the Save() (entities) or SaveMulti() (entity collections) overloads which accept a boolean parameter to signal whether the routine should save all entities recursively. Pass true to the Save / SaveMulti call and the whole object graph is saved, that is: all entities reachable from the object the Save (or SaveMulti) method is called on which are changed ('dirty').

All recursive save actions are performed inside a transaction. If the saved entity (the entity the Save() method is called on) or the saved entity collection is not participating in a transaction, a new transaction is created (an ADO.NET transaction, not COM+). If there is already a transaction available, it is assumed all entities to save already participate in this transaction or can participate in it (i.e. are not participating in another transaction). If an error occurs during the recursive save, the current transaction is aborted and rolled back.

The logic automatically determines the order in which actions need to take place so foreign key violations do not occur. For example: instantiate a Customer entity and add a new Order object to its Orders collection. Now add OrderDetails objects to the new Order object's OrderDetails collection. You can simply save the Customer entity and all included new/'dirty' entities will be saved and any PK-FK relations will be updated/synchronized automatically. If you alter the Customer object in the example above and then save the Order object, the Customer object is saved first, then the Order and then the OrderDetails objects, with all PK-FK values being synchronized.

FK-PK synchronization
This synchronization of FK-PK values is already done at the moment you set a property to a reference of an entity object, for example myOrder.Customer = myCustomer, if the entity (in this case myCustomer) is not new, or if the PK field(s) aren't sequenced fields when the entity is new. Synchronization is also performed after a save action, so identity/sequenced columns are also synchronized. If you set a foreign key field (for example Order.CustomerID) to a new value, the entity referenced by the relation the foreign key field is part of will be dereferenced and the property mapped onto that relation is set to null (C#) or Nothing (VB.NET). Example:
C# VB.NET

// C#
OrderEntity myOrder = new OrderEntity();
CustomerEntity myCustomer = new CustomerEntity("CHOPS");
myOrder.Customer = myCustomer;      // A
myOrder.CustomerID = "BLONP";       // B
CustomerEntity referencedCustomer = myOrder.Customer;   // C

' VB.NET
Dim myOrder As New OrderEntity()
Dim myCustomer As New CustomerEntity("CHOPS")
myOrder.Customer = myCustomer       ' A
myOrder.CustomerID = "BLONP"        ' B
Dim referencedCustomer As CustomerEntity = myOrder.Customer ' C

After line 'A', myOrder.CustomerID will be set to "CHOPS", because of the synchronization between the PK of Customer and the FK of Order. At line 'B', the foreign key field CustomerID of Order is changed to a new value, "BLONP". Because the FK field changes, the entity referenced through that FK field, Customer, is dereferenced and myOrder.Customer will return null. Due to the lazy loading code and because there is no currently referenced customer entity, the variable referencedCustomer will be set at line 'C' to a new Customer entity, fetched from the database with the PK "BLONP". The opposite is also true: if you set the property which represents a related entity to null (Nothing), the FK field(s) forming this relation will be set to null as well, as shown in the following example:
C# VB.NET

// C#
OrderEntity myOrder = new OrderEntity(10254);
CustomerEntity myCustomer = myOrder.Customer;   // A
myOrder.Customer = null;                        // B

' VB.NET
Dim myOrder As New OrderEntity(10254)
Dim myCustomer As CustomerEntity = myOrder.Customer ' A
myOrder.Customer = Nothing ' B

At line A, lazy loading will fetch the customer related to order 10254. At line B, this customer is dereferenced. This means that the FK field of order creating this relation, myOrder.CustomerId, will be set to null (Nothing). So if myOrder is saved after this, NULL will be saved in the field Order.CustomerId.
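The recursive save scenario described earlier in this section could be sketched as follows. Northwind entity names are assumed, and 'OrderDetailsEntity' is an assumed generated class name:

```csharp
CustomerEntity customer = new CustomerEntity("CHOPS");

OrderEntity newOrder = new OrderEntity();
customer.Orders.Add(newOrder);                       // also syncs newOrder.CustomerID
newOrder.OrderDetails.Add(new OrderDetailsEntity());

// Pass 'true' to save the whole reachable graph recursively, inside one
// transaction; the insert order and PK-FK syncing are determined automatically.
customer.Save(true);
```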

Deleting an entity
Deleting an entity is as simple as saving one: fetch the entity into memory and call Delete(). You can also delete an entity using an entity collection or remove it from the persistent storage directly (both methods use DeleteMulti* overloads; see Deleting one or more entities from the persistent storage ). To delete it the simple way, fetch it and call Delete():
C# VB.NET

// [C#]
CustomerEntity customer = new CustomerEntity("CHOPS");
customer.Delete();

' [VB.NET]
Dim customer As New CustomerEntity("CHOPS")
customer.Delete()

It's wise to add the entity object to a transaction object if you want to be able to roll back the delete later on in your routine. See the section about Transactions for more information.

Polymorphic fetches
Already mentioned earlier in this section is the phenomenon called 'polymorphic fetches'. Imagine the following entity setup: the BoardMember entity has an m:1 relation with CompanyCar. CompanyCar is the root of a TargetPerEntityHierarchy inheritance hierarchy and has two subtypes: FamilyCar and SportsCar. Because BoardMember has the relation with CompanyCar, a field called 'CompanyCar' is created in the BoardMember entity which is mapped onto the m:1 relation BoardMember - CompanyCar. In the database, several BoardMember instances have been stored, as well as several different CompanyCar instances, of type FamilyCar or SportsCar.

Using lazy loading, you can load the related CompanyCar instance of a given BoardMember instance by simply calling the 'CompanyCar' property:
C# VB.NET

// C#
CompanyCarEntity car = myBoardMember.CompanyCar;

' VB.NET
Dim car As CompanyCarEntity = myBoardMember.CompanyCar

However, 'car' in the example above can be of a different type. If, for example, the BoardMember instance in myBoardMember has a FamilyCar as company car, 'car' is of type FamilyCar. Because the fetch action can result in multiple types, the fetch is called polymorphic . So, in our example, if 'car' is of type FamilyCar, the following code would also be correct:
C# VB.NET

// C#
FamilyCarEntity car = (FamilyCarEntity)myBoardMember.CompanyCar;

' VB.NET
Dim car As FamilyCarEntity = CType(myBoardMember.CompanyCar, FamilyCarEntity)

Would this BoardMember instance have a SportsCar set as company car, this code would fail at runtime with a 'specified cast not valid' exception.

FetchPolymorphic
Each entity which is in an inheritance hierarchy has a static/Shared method called FetchPolymorphic . This method lets you fetch an entity which is a subtype of the entity you call the method on. For example, if the CompanyCar with ID '4' is a FamilyCar, you can do the following to fetch the entity into a FamilyCar instance:
C# VB.NET

// C#
FamilyCarEntity car = (FamilyCarEntity)CompanyCarEntity.FetchPolymorphic(null, 4, null);

' VB.NET
Dim car As FamilyCarEntity = CType(CompanyCarEntity.FetchPolymorphic(Nothing, 4, Nothing), FamilyCarEntity)

As this method accepts a transaction, it can be handy in some cases to use this method instead of a constructor call. For comparison, the constructor approach looks like this:
C# VB.NET

// C#
FamilyCarEntity car = new FamilyCarEntity(4);

' VB.NET
Dim car As New FamilyCarEntity(4)

FetchPolymorphicUsingUC...
Another way to fetch an entity polymorphically is available when the entity has a unique constraint and is in a hierarchy. You can then use the unique constraint's values to fetch the entity polymorphically, similar to the FetchPolymorphic method which uses the primary key. Say Employee has a unique constraint on 'Name'. To fetch an employee polymorphically, you can use the following code:
C# VB.NET

// C#
BoardMemberEntity b = (BoardMemberEntity)EmployeeEntity.FetchPolymorphicUsingUCName(null, "J.D. Rockefeller III", null);

' VB.NET
Dim b As BoardMemberEntity = CType(EmployeeEntity.FetchPolymorphicUsingUCName(Nothing, "J.D. Rockefeller III", Nothing), BoardMemberEntity)

Note : Be aware that polymorphic fetches of entities in a TargetPerEntity hierarchy (see Concepts - Entity inheritance and relational models ) use JOINs between the root entity's target and all subtype targets when the root type is specified for the fetch. This can have an impact on performance.

Concurrency control
There is an overloaded Save() variant which takes a predicate object, also known as a filter. The same is true for Delete(). This filter is constructed using the objects described in Getting started with filtering and can be used to set a condition under which the update (or, when the filter is passed to Delete(), the delete) has to take place; the filter is ignored when the entity is new. For example, a predicate object that contains a field = value compare clause for a timestamp column in the table where the entity-to-update is located: if the entity's timestamp column doesn't match the value defined in the predicate object passed to Save(), the save is not performed, and in the case of calling Delete() with a predicate, the delete will not take place.
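A sketch of such a filtered save; the Order field names are Northwind assumptions and the predicate compares against the field's original database value:

```csharp
OrderEntity order = new OrderEntity(10254);
order.Freight = 20.0M;

// Only perform the UPDATE if EmployeeID in the database still matches the
// value originally fetched into this entity (its DbValue):
bool succeeded = order.Save(
	OrderFields.EmployeeID == order.Fields[(int)OrderFieldIndex.EmployeeID].DbValue);
if(!succeeded)
{
	// another user changed EmployeeID in the meantime; handle the conflict
}
```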


To filter on the original database values fetched into the entity to be saved, you can create for example FieldCompareValuePredicate instances which use the EntityField's DbValue property. Even though a field has been changed in memory through code, the DbValue property of a field still holds the original value read from the database. You can use this for optimistic concurrency schemes; see the example below. If the field is NULL in the database, DbValue is null (C#) or Nothing (VB.NET).

LLBLGen Pro supports another way of supplying predicates for filters during Save or Delete actions: implementing IConcurrencyPredicateFactory. You can implement this interface to produce, based on the type of action (save or delete) and the entity the predicate is for, an IPredicateExpression object which is then used as the filter for the action. Each entity object has a property, ConcurrencyPredicateFactoryToUse, which can be set to an instance of IConcurrencyPredicateFactory. If specified, each Save() and Delete() call on the entity will consult this object for a filter object. This is also the case for recursive saves: if you want concurrency control deep down a recursive save, it's key that you set those objects' ConcurrencyPredicateFactoryToUse property to an instance of IConcurrencyPredicateFactory. IConcurrencyPredicateFactory instances can't be shared between Adapter and SelfServicing code.

Below is an example implementation of IConcurrencyPredicateFactory, which returns predicates that test for equality on EmployeeID for the particular order. This makes sure the Save or Delete action will only succeed if the entity in the database has the same value for EmployeeID as the in-memory entity.
C# VB.NET

// [C#]
private class OrderConcurrencyFilterFactory : IConcurrencyPredicateFactory
{
    public IPredicateExpression CreatePredicate(
        ConcurrencyPredicateType predicateTypeToCreate, object containingEntity)
    {
        IPredicateExpression toReturn = new PredicateExpression();
        OrderEntity order = (OrderEntity)containingEntity;
        switch(predicateTypeToCreate)
        {
            case ConcurrencyPredicateType.Delete:
                toReturn.Add(OrderFields.EmployeeID ==
                    order.Fields[(int)OrderFieldIndex.EmployeeID].DbValue);
                break;
            case ConcurrencyPredicateType.Save:
                // only for updates
                toReturn.Add(OrderFields.EmployeeID ==
                    order.Fields[(int)OrderFieldIndex.EmployeeID].DbValue);
                break;
        }
        return toReturn;
    }
}

' [VB.NET]
Private Class OrderConcurrencyFilterFactory
    Implements IConcurrencyPredicateFactory

    Public Function CreatePredicate( _
            predicateTypeToCreate As ConcurrencyPredicateType, containingEntity As Object) _
            As IPredicateExpression Implements IConcurrencyPredicateFactory.CreatePredicate
        Dim toReturn As IPredicateExpression = New PredicateExpression()
        Dim order As OrderEntity = CType(containingEntity, OrderEntity)
        Select Case predicateTypeToCreate
            Case ConcurrencyPredicateType.Delete
                toReturn.Add(OrderFields.EmployeeID = _
                    order.Fields(CInt(OrderFieldIndex.EmployeeID)).DbValue)
            Case ConcurrencyPredicateType.Save
                ' only for updates
                toReturn.Add(OrderFields.EmployeeID = _
                    order.Fields(CInt(OrderFieldIndex.EmployeeID)).DbValue)
        End Select
        Return toReturn
    End Function
End Class

Note : In the VB.NET code above, operator overloading is used. Operator overloading isn't available in VB.NET on .NET 1.0 or 1.1; it was introduced in VB.NET on .NET 2.0. If you're using VB.NET on .NET 1.x, create the predicates using: New FieldCompareValuePredicate(OrderFields.EmployeeID, ComparisonOperator.Equals, order.Fields(CInt(OrderFieldIndex.EmployeeID)).DbValue)

During recursive saves, if a save action fails, which can be caused by a ConcurrencyPredicateFactory-produced predicate (i.e. no rows are affected by the save action), an ORMConcurrencyException is thrown by the save logic, which terminates any transaction started by the recursive save.

To set an IConcurrencyPredicateFactory object when an entity is created or initialized, please see the section Adding your own code to the generated classes, which discusses various ways to modify the generated code to add your own initialization code, for example to set the IConcurrencyPredicateFactory instance for a particular object. You can also use the ConcurrencyPredicateFactoryToUse property of an EntityCollection to automatically set the ConcurrencyPredicateFactoryToUse property of each entity that's added to that collection.
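As a sketch, wiring a factory to a single entity or to a whole collection (using the OrderConcurrencyFilterFactory example class implemented above) could look like this:

```csharp
// [C#] Sketch: assigning an IConcurrencyPredicateFactory.
// OrderConcurrencyFilterFactory is the example class implemented above.
OrderEntity order = new OrderEntity(10254);
order.ConcurrencyPredicateFactoryToUse = new OrderConcurrencyFilterFactory();
order.Save();   // the update only succeeds if EmployeeID is unchanged in the database

// Or set it on a collection, so every entity added to the collection
// automatically gets its ConcurrencyPredicateFactoryToUse property set:
OrderCollection orders = new OrderCollection();
orders.ConcurrencyPredicateFactoryToUse = new OrderConcurrencyFilterFactory();
```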

Entities, NULL values and defaults
Some datatypes, like date-related datatypes and strings, are not always mandatory and can be set to an unknown value. In most cases this is NULL: the fields in the table are nullable and, if these fields do not yet have a value, they're set to NULL. Nullable fields often have a 'default' value set; this is a value which is inserted by the database server when a NULL is inserted into such a column. These default values are defined in the table definition itself.

.NET 1.x: no support for nullable value types

In .NET 1.x, NULL values aren't usable inside .NET, since a value type field (for example a field of type int/Integer) which can be NULL in the database can't be null/Nothing in .NET 1.x. If you generate code for .NET 1.x or CF.NET 1.0, LLBLGen Pro's generated code converts all NULL values for fields which have a ValueType as .NET type to default values for that particular ValueType. These values are defined in the helper class TypeDefaultValue. You can change these default values in the TypeDefaultValue class to other values, but keep in mind that these defaults are mostly irrelevant: you always have to test whether a given field was NULL when the data was fetched from the database. To test whether a given field was NULL when you read it from the database, use TestOriginalFieldValueForNull():
C# VB.NET

// [C#]
CustomerEntity customer = new CustomerEntity("CHOPS");
bool contactTitleIsNull = customer.TestOriginalFieldValueForNull(CustomerFieldIndex.ContactTitle);

' [VB.NET]
Dim customer As New CustomerEntity("CHOPS")
Dim contactTitleIsNull As Boolean = customer.TestOriginalFieldValueForNull(CustomerFieldIndex.ContactTitle)

The variable 'contactTitleIsNull' now contains true or false, depending on whether the field 'ContactTitle' for the entity "CHOPS" is NULL in the database (true) or not (false). This function will return true even if you've set the field to a new value but haven't saved the entity yet.

.NET 2.0: support for Nullable(Of valueType) types

In .NET 2.0, Microsoft introduced the concept of nullable value types, which means that a field of type int/Integer or any other ValueType can be null/Nothing. By default, LLBLGen Pro generates all ValueTyped fields as Nullable(Of valueType) if the target platform is .NET 2.0 or CF.NET 2.0. You can overrule this on a per-field basis; the preference (and project property) GenerateNullableFieldsAsNullableTypes controls the default value of the per-field setting which determines whether the field is generated as nullable or not (see: Designer - Adding and editing entities). With nullable types for value-typed fields, LLBLGen Pro won't convert a null/Nothing value for a field to a default value, but will return null/Nothing from the field's property.

NULL values read from the database

In previous versions of LLBLGen Pro, a NULL value read from the database would result in the default value for the field's type as the in-memory value. This has changed in v2 of LLBLGen Pro: if a field is NULL in the database, the in-memory value will be null/Nothing. This means that the CurrentValue property of the field object in the entity's Fields collection (entity.Fields[index].CurrentValue) will be null/Nothing in this situation, not a default value.

Note : When a value is read from an entity's field property (e.g. myCustomer.CompanyName) while that field hasn't been set to a value (which is the case in a new entity where the field hasn't been set yet), an ORMInvalidFieldReadException is thrown if the developer has set the static flag EntityBase(2).MakeInvalidFieldReadsFatal to true (default: false). In v1 you could get away with this and use the returned default value, but this isn't allowed anymore, because nullable fields now lead to different results, which would otherwise go unnoticed when you upgrade your project if the exception isn't thrown. Use the flag and the exception to track down code errors after migrating your v1 solution to v2.

Setting a field to NULL

Setting a field to NULL is easy. When you create a new entity, you simply do not supply a value for a field you want to set to NULL. The insert query will notice that the field isn't changed (because you didn't supply a value for it) and will skip the field. If you have set a default value for that column, the database engine will automatically fill in the default value for that field in the database; this is standard database behaviour. When you want to set a field of an existing entity to NULL, you have to use a special method: SetNewFieldValue(). You can set the field's value to null/Nothing and when you then save the entity, the value in the table will be NULL. You have to use this method and not a set operation on a property, because value types like int/Integer do not accept null/Nothing as a valid value. Using this method will not bypass checks; it's the same method that is used by properties to set the value for the fields related to the property. Example:
C# VB.NET

// [C#]
OrderEntity order = new OrderEntity(10254);
order.SetNewFieldValue((int)OrderFieldIndex.ShippingDate, null);
order.Save();

' [VB.NET]
Dim order As New OrderEntity(10254)
order.SetNewFieldValue(CInt(OrderFieldIndex.ShippingDate), Nothing)
order.Save()

On .NET 2.0, with nullable types, this is even easier:


C#, .NET 2.0 VB.NET, .NET 2.0

// [C#], .NET 2.0
OrderEntity order = new OrderEntity(10254);
order.ShippingDate = null;
order.Save();

' [VB.NET], .NET 2.0
Dim order As New OrderEntity(10254)
order.ShippingDate = Nothing
order.Save()

Usually, you won't need this often: most of the time fields are set to NULL when the entity is created and are updated with a value somewhere during the entity's lifecycle. To test whether a field currently represents a NULL value, or more precisely: whether the field would become NULL in the database if the entity were saved now, you can use a different method: TestCurrentFieldValueForNull():
C# VB.NET

// [C#]
CustomerEntity customer = new CustomerEntity("CHOPS");
customer.SetNewFieldValue((int)CustomerFieldIndex.ContactTitle, null);
customer.TestCurrentFieldValueForNull(CustomerFieldIndex.ContactTitle); // returns true

' [VB.NET]
Dim customer As New CustomerEntity("CHOPS")
customer.SetNewFieldValue(CType(CustomerFieldIndex.ContactTitle, Integer), Nothing)
customer.TestCurrentFieldValueForNull(CustomerFieldIndex.ContactTitle) ' returns true

Note : The usage of NULLs in databases is discouraged; NULLs should only be used for fields which are optional and often not filled in with a value. In other situations, always define a default value for a nullable column.

Extending an entity by intercepting activity calls
During the entity's lifecycle and the actions in which the entity participates, various methods of the entity are called, which can be good candidates for calling your own logic as well; for example, when the entity is initialized, you might want to perform your own initialization too. The entity classes offer a variety of methods you can override so your code is called in various situations. These methods all start with On and can be found in the LLBLGen Pro reference manual in the class EntityBase. The entity classes also offer events for some situations, like the Initializing and Initialized events. If you want to perform a given action when one of these methods is called, you can override them in the generated entity classes, preferably using the methods discussed in Adding your own code to the generated classes.
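As an illustrative sketch (OnInitialized is used here as an example of an On* method; consult the reference manual for the full list, and Adding your own code to the generated classes for the preferred way to add such code):

```csharp
// [C#] Sketch: overriding an On* callback in a generated entity class.
// OnInitialized is assumed to be one of the available On* methods;
// partial classes require .NET 2.0.
public partial class CustomerEntity
{
    protected override void OnInitialized()
    {
        base.OnInitialized();
        // your own initialization logic, e.g. setting defaults or
        // assigning an IConcurrencyPredicateFactory instance
    }
}
```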


Note : OnTransactionCommit and OnTransactionRollback are called on every entity participating in the transaction, regardless of whether an action was performed on the entity or not. To check whether an entity was saved during a transaction, test the entity's Fields.State property: if it's OutOfSync, the entity was saved.
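For example (the enum type name EntityState used below is an assumption; check the reference manual for the exact type of the Fields.State property):

```csharp
// [C#] Sketch: after a committed transaction, check whether an entity was saved.
// 'EntityState' is assumed to be the type of entity.Fields.State.
bool orderWasSaved = (order.Fields.State == EntityState.OutOfSync);
```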

IDataErrorInfo implementation
The .NET interface IDataErrorInfo is now implemented on EntityBase. Two methods have been added to the entities: SetEntityError and SetEntityFieldError, which allow external code to set the error message of a field and/or the entity. If append is set to true with SetEntityFieldError, the error message is appended to an existing message for that field using a semi-colon as separator. Entity field validation, which is triggered by the entity's method SetNewFieldValue() (which is called by a property setter), sets the field error if an exception occurs or when the custom field validator fails; the error message is appended to any existing message.
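A minimal sketch (the parameter order of SetEntityFieldError shown here is an assumption; see the reference manual for the exact signature):

```csharp
// [C#] Sketch: setting entity/field errors from external validation code.
CustomerEntity customer = new CustomerEntity("CHOPS");
// append == true: the message is appended to an existing field error,
// separated by a semi-colon.
customer.SetEntityFieldError("CompanyName", "Company name may not be empty", true);
customer.SetEntityError("Customer data is incomplete");
// Databound controls which consume IDataErrorInfo will now display these errors.
```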
LLBLGen Pro v2.6 documentation. ©2002-2008 Solutions Design


Generated code - Using the entity collection classes, SelfServicing
Preface
Per entity definition in a project, LLBLGen Pro will generate an entity collection class. This class is used to work on more than one entity at the same time and to retrieve more than one entity of the same type from the database. This section describes the different kinds of functionality bundled in the collection classes and how to utilize that functionality in your code.

Entity collections and generics

In .NET 1.x, the generated collection classes derive from EntityCollectionBase. In .NET 2.0, the generated collection classes derive from EntityCollectionBase(Of EntityType); for example, CustomerCollection derives from EntityCollectionBase<CustomerEntity> (C#) or EntityCollectionBase(Of CustomerEntity) (VB.NET). This has consequences for inheritance. If there are two entities, Employee and Manager, and Manager is a subtype of Employee, then ManagerCollection doesn't derive from EmployeeCollection; it derives directly from EntityCollectionBase<ManagerEntity>. The reason for this is that the collection classes in SelfServicing aren't a generic type, a design decision made so code generated with a previous version of LLBLGen Pro can be migrated to v2 without many problems. If ManagerCollection would inherit from EmployeeCollection, ManagerCollection would still be of type EntityCollectionBase<EmployeeEntity>. Keeping the generated collection classes non-generic, e.g. ManagerCollection instead of EntityCollection<ManagerEntity>, preserves backwards compatibility for users who migrate from an older version of LLBLGen Pro to v2.

Entity retrieval into an entity collection object
Entity collection objects can be filled with entities retrieved from the database in several ways. Below we'll walk you through the ones you will use the most.

Using a related entity
The easiest way to retrieve a set of entities into an entity collection class is by using a related entity, which in turn is used as a filter. For example, let's use our customer "CHOPS" again and retrieve all the order entities for that customer:
C# VB.NET

// [C#]
CustomerEntity customer = new CustomerEntity("CHOPS");
OrderCollection orders = customer.Orders;

' [VB.NET]
Dim customer As New CustomerEntity("CHOPS")
Dim orders As OrderCollection = customer.Orders

The entity inside 'customer' is used to filter the orders in the persistent storage and retrieve them as entity instances in the OrderCollection 'orders'. The collection is filled internally by calling an overloaded version of customer.GetMultiOrders(). That method does the actual retrieval. You can call GetMultiOrders() yourself directly too:
C# VB.NET

// [C#]
CustomerEntity customer = new CustomerEntity("CHOPS");
OrderCollection orders = customer.GetMultiOrders(false);

' [VB.NET]
Dim customer As New CustomerEntity("CHOPS")
Dim orders As OrderCollection = customer.GetMultiOrders(False)

'false/False' is passed in as parameter, which tells GetMultiOrders to fetch the entities from the persistent storage if the orders for this customer aren't already fetched, and to just return the collection if they are. This is called lazy loading or load on demand, since the collection of orders is loaded when you ask for it, not when the customer is loaded. This improves performance. When you specify true/True, GetMultiOrders() will always refetch the related order entities, which can be handy if you want to refresh the set of orders of a customer you hold in memory.

In our example of Customer and an Orders collection, the CustomerEntity has a property called AlwaysFetchOrders, which defaults to false. Setting this property to true will load the related entities each time you access the property, which can be handy if you want to stay up to date with the related entity state in the database. It can degrade performance, so use the property with care. A forced fetch differs from AlwaysFetchFieldMappedOnRelation in that a forced fetch will clear the collection first, while AlwaysFetchFieldMappedOnRelation does not. A forced fetch will thus remove new entities added to the collection from that collection, as these are not yet stored in the database.

GetMultiOrders() has more overloads than this one. You can, for example, specify an extra filter using a predicate expression, which lets you filter the orders even further, for example all orders before a given date: that will result in all orders for that particular customer created before that date. Some overloads also accept an entity factory object. You can use that to create special Order entity classes via the GetMultiOrders() routine: the entity factory object will be used by the data retriever to create a new entity for every datarow read, in this case order data. You probably will not use this option unless you want to extend the framework dramatically.

To apply sorting and a maximum number of items to return for every time you access customer.Orders for that particular customer object, set the sort clauses and the maximum number of entities to return by calling customer.SetCollectionParametersOrders():
C# VB.NET

// [C#]
CustomerEntity customer = new CustomerEntity("CHOPS");
ISortExpression sorter = new SortExpression(OrderFields.OrderDate | SortOperator.Descending);
customer.SetCollectionParametersOrders(10, sorter);
OrderCollection orders = customer.GetMultiOrders(false);

' [VB.NET]
Dim customer As New CustomerEntity("CHOPS")
Dim sorter As ISortExpression = New SortExpression( _
    New SortClause(OrderFields.OrderDate, SortOperator.Descending))
customer.SetCollectionParametersOrders(10, sorter)
Dim orders As OrderCollection = customer.GetMultiOrders(False)

Now a call to GetMultiOrders or customer.Orders (which is also used in databinding scenarios) will sort the orders on OrderDate, descending, and will return a maximum of 10 order entities. If Order is in an inheritance hierarchy, the fetch is polymorphic. This means that if the customer entity, in this case customer "CHOPS", has references to instances of different derived types of Order, every instance in customer.Orders is of the type it represents, which effectively means that not every instance in Orders is necessarily of the same type. See Polymorphic fetches for more information.

Using a prefetch path
An easy way to retrieve a set of entities is by using a Prefetch Path, which reads related entities together with the entity or entities to fetch. For more information about Prefetch Paths and how to use them, see: Prefetch Paths.


Using the collection object
The most flexible way to retrieve a set of entities into an entity collection is by simply using the entity collection object and its many methods to retrieve entities of the type related to that particular collection. Let's concentrate on the OrderCollection. The Order entity has a rich set of relationships: with Customer, Employee and Shipper (m:1 relations), with OrderDetails (1:n relation) and with Product (m:n relation over OrderDetail). LLBLGen Pro adds retrieval methods to the entity collection object that utilize these relationships (except 1:1 and 1:n relationships, which will always result in a single instance) to retrieve a set of entities fast. Let's look at the two relation types which end up as methods in the collection classes: m:1 and m:n.

Using the m:1 relations

First the m:1 relationships. All entities with an m:1 relationship with the current entity, in this case Order, are added as filter entities to a single method with several overloads: GetMultiManyToOne(). This way you can filter on orders by specifying one or more (or all) of the related entities as filters. In the case of the Order entity, the GetMultiManyToOne() overloads accept all three entities, and if one is specified, the values of that entity's field(s) which are directly related to the Order entity (for Customer this is for example the CustomerID) are used as filters, defining which orders to load into the collection. As an example, we'll load all orders of the customer "CHOPS" which were placed by the employee with EmployeeID 1.
C# VB.NET

// [C#]
CustomerEntity customer = new CustomerEntity("CHOPS");
EmployeeEntity employee = new EmployeeEntity(1);
OrderCollection orders = new OrderCollection();
orders.GetMultiManyToOne(customer, employee, null);

' [VB.NET]
Dim customer As New CustomerEntity("CHOPS")
Dim employee As New EmployeeEntity(1)
Dim orders As New OrderCollection()
orders.GetMultiManyToOne(customer, employee, Nothing)

We pass in two existing entity objects, which both have their entity data loaded into memory, as filters to the GetMultiManyToOne() routine, and we pass null/Nothing for the Shipper entity, which tells the Dynamic Query Engine (DQE) to construct a retrieval query with a filter on just customer and employee. The power of the relations is used to construct filters which do not require you to construct relation collections or predicate expressions. If you want to limit the number of objects returned in the collection, want to sort the objects before they're added to the collection or want to filter the objects further (for example, in this case, all orders before a given date), you can specify a number for the maximum number of objects to return (0 for all objects), a set of sort clauses and an extra predicate expression, using one of the overloaded GetMultiManyToOne methods. For details about sort clauses and predicate expressions, see Getting started with filtering and Sorting.

Using the m:n relations

When an entity has one or more m:n (many to many) relationships with other entities, LLBLGen Pro will also generate easy to use filter methods to filter objects using the related entity. Per entity related via an m:n relation, there is one GetMultiManyToManyUsing<field mapped on m:n relation> method which accepts an instance of the related entity as a filter. Each GetMultiManyToMany* routine has an overload which accepts a maximum number of objects to return, as well as a set of sort clauses to sort the objects before they're added to the collection. Let's retrieve all orders which contain the purchase of the product with ProductID 10. In the Order entity, we named the field mapped on the m:n relation Order - Product 'Products', which thus ends up in the method name:
C# VB.NET

// [C#]
ProductEntity product = new ProductEntity(10);
OrderCollection orders = new OrderCollection();
orders.GetMultiManyToManyUsingProducts(product);

' [VB.NET]
Dim product As New ProductEntity(10)
Dim orders As New OrderCollection()
orders.GetMultiManyToManyUsingProducts(product)

We also could have used the product entity itself to retrieve the same set of orders:
C# VB.NET

// [C#]
ProductEntity product = new ProductEntity(10);
OrderCollection orders = product.Orders;

' [VB.NET]
Dim product As New ProductEntity(10)
Dim orders As OrderCollection = product.Orders

or even:
C# VB.NET

// [C#]
ProductEntity product = new ProductEntity(10);
OrderCollection orders = product.GetMultiOrders(false);

' [VB.NET]
Dim product As New ProductEntity(10)
Dim orders As OrderCollection = product.GetMultiOrders(False)

There are multiple ways to retrieve the same data in the framework LLBLGen Pro generates for you. It's up to you which one you'll use in which situation.

Total control: GetMulti()
LLBLGen Pro also generates a method called GetMulti() with several overloads, which gives you complete freedom in how you want to retrieve the entities: it accepts a predicate expression as filter, sort clauses for sorting, a value for the maximum number of entities to retrieve and, if you use a multi-entity filter, a relations collection. GetMulti() has various overloads to ease calling the method. See for an example the Multi-entity filters section in Advanced filtering, or the small example below, which simply passes null/Nothing as the predicate to GetMulti(), making GetMulti() return all entities, so no filtering is performed:
C# VB.NET

// [C#]
OrderCollection orders = new OrderCollection();
orders.GetMulti(null); // all orders will be read into the collection

' [VB.NET]
Dim orders As New OrderCollection()
orders.GetMulti(Nothing) ' all orders will be read into the collection
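A sketch of a filtered, sorted fetch (this assumes one of GetMulti()'s overloads which accepts a filter, a maximum number of entities and sort clauses; see the reference manual for the exact overload list):

```csharp
// [C#] Sketch: fetch at most 25 orders of customer "CHOPS", newest first.
OrderCollection orders = new OrderCollection();
IPredicateExpression filter = new PredicateExpression(OrderFields.CustomerID == "CHOPS");
ISortExpression sorter = new SortExpression(OrderFields.OrderDate | SortOperator.Descending);
orders.GetMulti(filter, 25, sorter);
```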

Entity data manipulation using collection classes
Manipulating the entity data of more than one entity at once can be cumbersome when you work with objects that have to be loaded into memory: all entities you want to manipulate have to be instantiated in memory, you have to alter the fields of these objects and then save them individually. LLBLGen Pro offers functionality to work on entity data directly in the persistent storage. This opens up the possibility of doing bulk updates or bulk deletes with a single method call, greatly reducing database traffic and increasing performance. It also improves concurrency safety among threads, because you alter data directly in the shared repository, so other threads see changes immediately.

Updating a set of entities in the persistent storage
LLBLGen Pro generates two types of update methods: a generic one, UpdateMulti(), and one for the m:1 relations the entity has, UpdateMultiManyToOne(). Both update methods work practically the same: you specify an entity with the new values for the fields you want to change (the framework will update the fields which have a changed value) and a filter to narrow down the entities to update. UpdateMulti() takes a predicate expression to filter the entity rows to update and an entity object which is used to supply the new values and to determine which fields have to be updated; if you use a filter spanning multiple entities, you can supply a relation collection in an overloaded version. UpdateMultiManyToOne() also accepts an entity with the new values, but instead of accepting a predicate expression and relation collection, it accepts a set of related entities. For example, the Order entity accepts as a filter in its UpdateMultiManyToOne() a customer entity, an employee entity and a shipper entity, just like the GetMultiManyToOne() methods.

As an example, let's update all order rows which have an order detail row with a product with ProductID 10 and have an OrderID larger than 11000, by setting the EmployeeID to the value 6. This is a multi-entity filter update, so we have to construct a predicate expression, a relation collection and a new order entity with the values to update in the order entity rows matched by the specified filter. The UpdateMulti*() routines check each field in the passed-in entity object; if the field has been changed since the instantiation of the entity object, that field will be updated in all matched entities and set to the value specified as the field value. The following code performs such an update.
C# VB.NET

// [C#]
OrderCollection orders = new OrderCollection();
OrderEntity orderUpdate = new OrderEntity();
orderUpdate.EmployeeID = 6;
RelationCollection relationsToUse = new RelationCollection();
relationsToUse.Add(OrderEntity.Relations.OrderDetailsEntityUsingOrderID);
IPredicateExpression selectFilter = new PredicateExpression(OrderDetailsFields.ProductID == 10);
selectFilter.AddWithAnd(OrderFields.OrderID > 11000);
int amountRowsAffected = orders.UpdateMulti(orderUpdate, selectFilter, relationsToUse);

' [VB.NET]
Dim orders As New OrderCollection()
Dim orderUpdate As New OrderEntity()
orderUpdate.EmployeeID = 6
Dim relationsToUse As New RelationCollection()
relationsToUse.Add(OrderEntity.Relations.OrderDetailsEntityUsingOrderID)
Dim selectFilter As IPredicateExpression = New PredicateExpression( _
    OrderDetailsFields.ProductID = 10)
selectFilter.AddWithAnd(OrderFields.OrderID > 11000)
Dim amountRowsAffected As Integer = orders.UpdateMulti(orderUpdate, selectFilter, relationsToUse)


Note : In the VB.NET code above, operator overloading is used. Operator overloading isn't available in VB.NET on .NET 1.0 or 1.1; it was introduced in VB.NET on .NET 2.0. If you're using VB.NET on .NET 1.x, create the predicates using the pattern: New FieldCompareValuePredicate(OrderDetailsFields.ProductID, ComparisonOperator.Equals, 10)

You'll notice the return value of UpdateMulti, which is an integer. The value returned equals the number of rows (entities) affected by the executed update statement. If the RDBMS has been set up to suppress row counting, this value will be -1. If no rows were affected, this value will be 0. The UpdateMultiManyToOne() method works similarly, but accepts one or more entities for filtering; see the GetMultiManyToOne() description earlier on this page. Because a single statement is executed, it is automatically run in a transaction. The DQE takes care of a new database transaction if the database server doesn't support a transaction per statement (all major databases, including SqlServer, do support this).

Updating entities in a collection in memory
When you have loaded a set of entities into a collection and, for example, have bound this collection to a datagrid, the user has probably altered the fields of one or more objects in the collection. You can also alter the fields yourself by looping through the objects inside the collection. When you want to save these changes to the persistent storage, you can use the save methods of the individual objects inside the collection, but you can also use the SaveMulti() method, which walks all objects inside the collection and, if an object is 'dirty' (which means it's been changed and should be updated in the persistent storage), saves it. This is all done in a transaction if no transaction is currently available (see the section Transactions for more information). Entity collections support saving entities recursively, as discussed in the Saving entities recursively section in Using the entity classes. Just pass true for the recurse parameter in one of SaveMulti()'s overloads and all entities are saved recursively. This behaviour is not enabled by default for backwards compatibility reasons, so if you want to save entities recursively, use the overload of SaveMulti() which lets you specify true for recurse.
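A minimal sketch (assuming the SaveMulti() overload with a recurse parameter mentioned above):

```csharp
// [C#] Sketch: save all dirty entities in the collection, recursing into
// related dirty entities as well.
OrderCollection orders = customer.Orders;
// ... fields of orders and their related entities are changed in memory ...
orders.SaveMulti(true);   // true = save recursively
```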

Deleting one or more entities from the persistent storage
If you want to delete one or more entities from the persistent storage, the same problem as with updating a set of entities appears: you first have to load them into memory and call Delete() on each of them. To delete a set of entities from the persistent storage directly, you can use the DeleteMulti() overloads or the DeleteMultiManyToOne() method. All DeleteMulti*() methods work directly on the persistent storage, except one: the DeleteMulti() method which does not take any parameters. That one works with the objects inside the collection and deletes them one by one from the persistent storage, using its own transaction if the current collection isn't part of an existing transaction (see the section Transactions for more information). The DeleteMulti*() methods which do accept parameters and thus work on the persistent storage work the same as the UpdateMulti*() methods, except of course that the DeleteMulti*() methods do not accept an entity with changed fields. For an example of how to filter rows for DeleteMulti*(), see the UpdateMulti() example given earlier on this page.
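A sketch of a direct bulk delete (assuming a DeleteMulti() overload which accepts a predicate and returns the number of affected rows, analogous to UpdateMulti()):

```csharp
// [C#] Sketch: delete directly in the persistent storage all orders
// with an OrderID below 10250, without loading them into memory first.
OrderCollection orders = new OrderCollection();
int amountDeleted = orders.DeleteMulti(OrderFields.OrderID < 10250);
```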

Note : DeleteMulti(*) is not supported for entities which are in a hierarchy of type TargetPerEntity (See Concepts - Entity inheritance and relational models. ). This is by design, as the delete action isn't possible in one go with proper checks due to referential integrity issues.

Client side sorting
In v2 of LLBLGen Pro, the entity collections don't implement IBindingList anymore; instead, the entity collections use EntityView classes to bind to grids and other controls and let these views do the filtering and sorting of the entity collection data. To keep backwards compatibility, the Sort() methods of the entity collection classes have been kept and work as they did in previous versions of LLBLGen Pro. It's recommended you use an EntityView class to sort and filter an entity collection instead of using the Sort() methods directly on an entity collection. For more information about EntityView classes, see Generated code - using entity views with entity collections.

To sort a fetched collection in memory, without going back to the database, use the entity collection's Sort method (there are various overloads). This method internally uses the ArrayList's QuickSort method on the property specified (either by field index or property name). Two overloads also accept an IComparer object, which will then sort the entities based on the implementation of that IComparer object, which you can supply yourself. Below is an example of how to sort a fetched CustomerCollection in memory, on company name.
C# VB.NET

// C#
CustomerCollection customers = new CustomerCollection();
customers.GetMulti(null);
customers.Sort((int)CustomerFieldIndex.CompanyName, ListSortDirection.Descending);

' VB.NET
Dim customers As New CustomerCollection()
customers.GetMulti(Nothing)
customers.Sort(CInt(CustomerFieldIndex.CompanyName), ListSortDirection.Descending)

The Sort method can sort on one property/field at a time. For an easier way to sort an entity collection, use an entity view: Generated code - using entity views with entity collections

Finding entities inside a fetched entity collection
Although it's recommended to use EntityView objects to filter and sort an in-memory entity collection, it can sometimes be helpful to have a quick way to find, in an in-memory entity collection, an entity or group of entities matching a filter. The entity collection classes offer this facility through the method FindMatches(IPredicate). FindMatches accepts a normal LLBLGen Pro predicate (see for more information about predicates: Generated code - getting started with filtering) and returns a list of indexes of all entities matching that predicate. As a PredicateExpression is also a predicate, you can specify a complex filter, including filters on non-field properties, to find the entities you're looking for. On .NET 1.x, FindMatches returns an ArrayList; on .NET 2.0, it returns a List<int> / List(Of Integer). The following example finds the indexes of all customer entities from the UK in the fetched entity collection of customers. FindMatches performs an in-memory filter; it won't go to the database.
C# VB.NET

// C#
IPredicate filter = (CustomerFields.Country == "UK");
ArrayList indexes = myCustomers.FindMatches(filter);

' VB.NET
Dim filter As New FieldCompareValuePredicate(CustomerFields.Country, ComparisonOperator.Equal, "UK")
Dim indexes As ArrayList = myCustomers.FindMatches(filter)

Note : If you're using VB.NET on .NET 2.0, you can use the simplified syntax using operator overloading: Dim filter As IPredicate = (CustomerFields.Country = "UK")


Note : When using a FieldCompareValuePredicate with FindMatches, be sure to specify the value in the same type as the value of the field. For example, if the field is of type Int64 and you specify the value 1 to compare with, you'll be comparing an Int64 with an Int32, which will fail. Instead, specify the value 1 as an Int64 as well.

FindMatches is the same routine used by EntityViews to find the entities which should belong in the view. As the routine is defined virtual / Overridable, you can tweak the way the entities are matched.
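As an illustrative sketch of the type-matching note above (OrderFields.OrderId is a hypothetical Int64-typed field here; adjust the names to your own generated entities):

```csharp
// Hypothetical example: OrderFields.OrderId is assumed to map to a bigint (Int64) column.
// The literal 1 is an Int32, so the in-memory comparison compares an Int64 with an Int32
// and the predicate matches nothing:
IPredicate wrongFilter = (OrderFields.OrderId == 1);

// Specify the value as an Int64 (the 'L' suffix) so both sides have the same type:
IPredicate correctFilter = (OrderFields.OrderId == 1L);
List<int> indexes = myOrders.FindMatches(correctFilter);  // .NET 2.0 returns List<int>
```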

Hierarchical projections of entity collections
LLBLGen Pro allows you to create projections of the full graph of all the entities inside a given entity collection onto a DataSet or a Dictionary object. A hierarchical projection is a projection where all entities in the entity collection, plus all their related entities and so on, are grouped together per entity type. Say you have the following graph in memory: a set of CustomerEntity instances, each containing a set of OrderEntity instances, where each OrderEntity instance refers to an EmployeeEntity instance.

This projection functionality is implemented on the entity collection, in the method CreateHierarchicalProjection. It's implemented on the entity collection classes (e.g. CustomerCollection) and not on the EntityView class because it affects related entities as well, while an EntityView is a one-level-deep view on an entity collection.

With LLBLGen Pro it's possible to project this graph onto a DataSet, which results in a new DataTable object per entity type with all instances of that entity type (and the data relations set up correctly). You can also project it onto a Dictionary (Hashtable in .NET 1.x) with, per entity type, an entity collection which contains the entities of that type. Projections are defined in instances of the IViewProjectionData interface, which is implemented in the ViewProjectionData class. This class combines per-type projections (as shown below in the example) which are then used as one projection on the complete graph.

By default, when projecting to a DataSet, only the entity types which have instances in the graph get a DataTable in the resulting DataSet. If you want a DataSet which always has an expected number of DataTable instances (where the tables for entity types not present in the graph are empty), you can pre-create the DataSet and pass it to the projection routine.
LLBLGen Pro's runtime library contains helper routines to produce an empty DataSet with empty DataTables, the correct columns and the proper DataRelation objects set up, based on a specified prefetch path. Please consult the LLBLGen Pro Reference Manual for the GeneralUtils class' ProduceEmptyDataSet and ProduceEmptyDataTable routines.

Examples
The following examples show both projections (to a DataSet and to a Dictionary) of the graph of Customers - Orders - Employees described earlier. The examples first fetch the complete graph of customers, orders and employees, and then create a projection of that graph. Usage of custom projections per property and of additional filters is also shown. Please refer to the LLBLGen Pro reference manual for details about the generic ViewProjectionData class and its constructors. .NET 1.x users should use ArrayList instances instead of List(Of T) and should use the non-generic ViewProjectionData class.

Projection to DataSet
C# VB.NET

// C#
CustomerCollection customers = new CustomerCollection();
PrefetchPath path = new PrefetchPath(EntityType.CustomerEntity);
path.Add(CustomerEntity.PrefetchPathOrders).SubPath.Add(OrderEntity.PrefetchPathEmployees);
customers.GetMulti(null, path);

// setup projections per type.
List<IEntityPropertyProjector> customerProjections = EntityFields.ConvertToProjectors(
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.CustomerEntity));
// add an additional projector so the destination DataTable will have an additional column
// called 'IsNew' with the value of the IsNew property of the customer entities.
customerProjections.Add(new EntityPropertyProjector(new EntityProperty("IsNew"), "IsNew"));
List<IEntityPropertyProjector> orderProjections = EntityFields.ConvertToProjectors(
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.OrderEntity));
List<IEntityPropertyProjector> employeeProjections = EntityFields.ConvertToProjectors(
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.EmployeeEntity));
List<IViewProjectionData> projectionData = new List<IViewProjectionData>();
// create the customer projection information. Specify a filter so only customers
// from Germany are projected.
projectionData.Add(new ViewProjectionData<CustomerEntity>(
    customerProjections, (CustomerFields.Country == "Germany"), true));
projectionData.Add(new ViewProjectionData<OrderEntity>(orderProjections, null, false));
projectionData.Add(new ViewProjectionData<EmployeeEntity>(employeeProjections));
DataSet result = new DataSet("projectionResult");
customers.CreateHierarchicalProjection(projectionData, result);

' VB.NET
Dim customers As New CustomerCollection()
Dim path As New PrefetchPath(EntityType.CustomerEntity)
path.Add(CustomerEntity.PrefetchPathOrders).SubPath.Add(OrderEntity.PrefetchPathEmployees)
customers.GetMulti(Nothing, path)

' setup projections per type.
Dim customerProjections As List(Of IEntityPropertyProjector) = EntityFields.ConvertToProjectors( _
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.CustomerEntity))
' add an additional projector so the destination DataTable will have an additional column
' called 'IsNew' with the value of the IsNew property of the customer entities.
customerProjections.Add(New EntityPropertyProjector(New EntityProperty("IsNew"), "IsNew"))
Dim orderProjections As List(Of IEntityPropertyProjector) = EntityFields.ConvertToProjectors( _
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.OrderEntity))
Dim employeeProjections As List(Of IEntityPropertyProjector) = EntityFields.ConvertToProjectors( _
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.EmployeeEntity))
Dim projectionData As New List(Of IViewProjectionData)()
' create the customer projection information. Specify a filter so only customers
' from Germany are projected.
projectionData.Add(New ViewProjectionData(Of CustomerEntity)( _
    customerProjections, (CustomerFields.Country = "Germany"), True))
projectionData.Add(New ViewProjectionData(Of OrderEntity)(orderProjections, Nothing, False))
projectionData.Add(New ViewProjectionData(Of EmployeeEntity)(employeeProjections))
Dim result As New DataSet("projectionResult")
customers.CreateHierarchicalProjection(projectionData, result)

The same projectors as used with the projection to the DataSet are usable with a projection to a Dictionary, which is almost identical to the DataSet example. .NET 1.x users should use a Hashtable object instead of a Dictionary object.

Projection to Dictionary
C# VB.NET

// C#
CustomerCollection customers = new CustomerCollection();
PrefetchPath path = new PrefetchPath(EntityType.CustomerEntity);
path.Add(CustomerEntity.PrefetchPathOrders).SubPath.Add(OrderEntity.PrefetchPathEmployees);
customers.GetMulti(null, path);

// setup projections per type.
List<IEntityPropertyProjector> customerProjections = EntityFields.ConvertToProjectors(
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.CustomerEntity));
// add an additional projector so the destination DataTable will have an additional column
// called 'IsNew' with the value of the IsNew property of the customer entities.
customerProjections.Add(new EntityPropertyProjector(new EntityProperty("IsNew"), "IsNew"));
List<IEntityPropertyProjector> orderProjections = EntityFields.ConvertToProjectors(
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.OrderEntity));
List<IEntityPropertyProjector> employeeProjections = EntityFields.ConvertToProjectors(
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.EmployeeEntity));
List<IViewProjectionData> projectionData = new List<IViewProjectionData>();
// create the customer projection information. Specify a filter so only customers
// from Germany are projected.
projectionData.Add(new ViewProjectionData<CustomerEntity>(
    customerProjections, (CustomerFields.Country == "Germany"), true));
projectionData.Add(new ViewProjectionData<OrderEntity>(orderProjections, null, false));
projectionData.Add(new ViewProjectionData<EmployeeEntity>(employeeProjections));
Dictionary<Type, IEntityCollection> projectionResults = new Dictionary<Type, IEntityCollection>();
customers.CreateHierarchicalProjection(projectionData, projectionResults);

' VB.NET
Dim customers As New CustomerCollection()
Dim path As New PrefetchPath(EntityType.CustomerEntity)
path.Add(CustomerEntity.PrefetchPathOrders).SubPath.Add(OrderEntity.PrefetchPathEmployees)
customers.GetMulti(Nothing, path)

' setup projections per type.
Dim customerProjections As List(Of IEntityPropertyProjector) = EntityFields.ConvertToProjectors( _
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.CustomerEntity))
' add an additional projector so the destination DataTable will have an additional column
' called 'IsNew' with the value of the IsNew property of the customer entities.
customerProjections.Add(New EntityPropertyProjector(New EntityProperty("IsNew"), "IsNew"))
Dim orderProjections As List(Of IEntityPropertyProjector) = EntityFields.ConvertToProjectors( _
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.OrderEntity))
Dim employeeProjections As List(Of IEntityPropertyProjector) = EntityFields.ConvertToProjectors( _
    EntityFieldsFactory.CreateEntityFieldsObject(EntityType.EmployeeEntity))
Dim projectionData As New List(Of IViewProjectionData)()
' create the customer projection information. Specify a filter so only customers
' from Germany are projected.
projectionData.Add(New ViewProjectionData(Of CustomerEntity)( _
    customerProjections, (CustomerFields.Country = "Germany"), True))
projectionData.Add(New ViewProjectionData(Of OrderEntity)(orderProjections, Nothing, False))
projectionData.Add(New ViewProjectionData(Of EmployeeEntity)(employeeProjections))
Dim projectionResults As New Dictionary(Of Type, IEntityCollection)()
customers.CreateHierarchicalProjection(projectionData, projectionResults)

Note : If you just want a structure with, per entity type, a collection containing all instances of that type in the entity graph (so not a projection to new copies of the entities), use the routine ObjectGraphUtils.ProduceCollectionsPerTypeFromGraph. The ObjectGraphUtils class is located in the ORMSupportClasses namespace and contains a variety of routines working on entity graphs. Please see the LLBLGen Pro reference manual for details on this class and this method.

Tracking entity remove actions
Removing an entity from a collection by calling entityCollection.Remove(toRemove) or entityCollection.RemoveAt(index) is an ambiguous action: do you want to remove the entity from the collection to further process the entities left, or do you want to get rid of the entity completely, both in memory and in the database? This is the reason why LLBLGen Pro doesn't automatically perform deletes on the database when you remove an entity from a collection: you have to explicitly specify which entities to delete.

Tracking which entities are removed from an entity collection, so they can later be removed from the database, can be a bit cumbersome if the collection is bound to a grid, for example. To overcome this, LLBLGen Pro can make an entity collection track the entities removed from it, using another entity collection. This way, you can keep track of which entities were removed from the entity collection and pass them on to a UnitOfWork object for persistence in one transaction together with the rest of the changed entities. The extra collection is necessary because an entity removed from the collection isn't in that collection anymore, so the collection itself can't refer to it.

To enable removal tracking in an entity collection, set its RemovedEntitiesTracker property to the collection into which you want to track the removed entities. This tracker collection can then be added to a UnitOfWork object for deletion using unitOfWork.AddCollectionForDelete(collectionWithEntitiesToDelete), or you can delete the entities by calling DeleteMulti() on the tracker collection. The following example illustrates this.
C# VB.NET

// C#
// First fetch all customers from Germany with their orders.
CustomerCollection customers = new CustomerCollection();
PrefetchPath path = new PrefetchPath(EntityType.CustomerEntity);
path.Add(CustomerEntity.PrefetchPathOrders);
customers.GetMulti(CustomerFields.Country == "Germany", path);
// we now will add a tracker collection to the orders collection of customer 0.
OrderCollection tracker = new OrderCollection();
customers[0].Orders.RemovedEntitiesTracker = tracker;
// after this, we can do this:
customers[0].Orders.Remove(myOrder);
// and myOrder is removed from the in-memory collection customers[0].Orders
// and it's placed in tracker. We can now delete the entities in tracker
// by using a UnitOfWork object or by calling tracker.DeleteMulti().

' VB.NET
' First fetch all customers from Germany with their orders.
Dim customers As New CustomerCollection()
Dim path As New PrefetchPath(EntityType.CustomerEntity)
path.Add(CustomerEntity.PrefetchPathOrders)
customers.GetMulti(CustomerFields.Country = "Germany", path)
' we now will add a tracker collection to the orders collection of customer 0.
Dim tracker As New OrderCollection()
customers(0).Orders.RemovedEntitiesTracker = tracker
' after this, we can do this:
customers(0).Orders.Remove(myOrder)
' and myOrder is removed from the in-memory collection customers(0).Orders
' and it's placed in tracker. We can now delete the entities in tracker
' by using a UnitOfWork object or by calling tracker.DeleteMulti().

Note : Removal tracking isn't used by the Clear() method, because Clear is often used to clean up a collection, not to remove entities from the database. To avoid false positives and the deletion of entities which weren't supposed to be deleted, removal tracking isn't available for the Clear method.
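A short sketch contrasting Remove() and Clear() with a tracker collection attached (the names reuse the example above; the behavior follows the note):

```csharp
OrderCollection tracker = new OrderCollection();
myCustomer.Orders.RemovedEntitiesTracker = tracker;

myCustomer.Orders.Remove(myOrder);  // tracked: tracker now contains myOrder
myCustomer.Orders.Clear();          // NOT tracked: tracker still contains only myOrder
```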

LLBLGen Pro v2.6 documentation. ©2002-2008 Solutions Design


Generated code - Using the EntityView class, SelfServicing
Preface
The EntityView is a class used to create in-memory views on an entity collection object; it allows you to filter and sort an in-memory entity collection without actually touching the data inside the entity collection. A collection can have multiple EntityView objects, similar to the DataTable - DataView combination. This section describes how to use the EntityView class in various scenarios. For clarity, the .NET 1.x syntax is used, unless stated otherwise. In .NET 2.0, the EntityView class is a generic class, EntityView(Of TEntity), where TEntity is an entity class which derives (indirectly) from EntityBase and implements IEntity, which all generated entity classes do.

DataBinding and EntityViews
Entity collections don't bind directly to a bound control: they always bind through an EntityView object (returned by the property DefaultView, see below). This is a change from the approach taken by LLBLGen Pro 1.0.2005.1 and earlier, where an entity collection was always bound directly to a bound control. The EntityView approach allows you to create multiple EntityViews on a single entity collection and bind them all to different controls, as if they're different sets of data.

Creating an EntityView instance
Creating an EntityView object is simple:
C#, .NET 1.x VB.NET, .NET 1.x

// C#, .NET 1.x
CustomerCollection customers = new CustomerCollection();
customers.GetMulti(null); // fetch all Customers
EntityView customerView = new EntityView(customers);

' VB.NET, .NET 1.x
Dim customers As New CustomerCollection()
customers.GetMulti(Nothing) ' fetch all Customers
Dim customerView As New EntityView(customers)

With .NET 2.0, you have to define the EntityView with the explicit type of the collection's contained entity type, in this case CustomerEntity:
C#, .NET 2.0 VB.NET, .NET 2.0

// C#, .NET 2.0
EntityView<CustomerEntity> customerView = new EntityView<CustomerEntity>(customers);

' VB.NET, .NET 2.0
Dim customerView As New EntityView(Of CustomerEntity)(customers)

For the rest of the section, unless stated otherwise, EntityView can be replaced with EntityView(Of T) in .NET 2.0 code.

This creates an EntityView object on the entity collection customers, letting you view the data in the entity collection 'customers'. EntityView objects don't contain any data: all data you'll be able to access through an EntityView actually resides in the related entity collection.


You can also use the entity collection's DefaultView property to create an EntityView. This is similar to the DataTable's DefaultView property: every time you read the property, you get the same view object back:
C#, .NET 1.x VB.NET, .NET 1.x C#, .NET 2.0 VB.NET, .NET 2.0

// C#, .NET 1.x
CustomerCollection customers = new CustomerCollection();
customers.GetMulti(null); // fetch all Customers
EntityView customerView = customers.DefaultView;

' VB.NET, .NET 1.x
Dim customers As New CustomerCollection()
customers.GetMulti(Nothing) ' fetch all Customers
Dim customerView As EntityView = customers.DefaultView

// C#, .NET 2.0
CustomerCollection customers = new CustomerCollection();
customers.GetMulti(null); // fetch all Customers
IEntityView customerView = customers.DefaultView;
// or:
// EntityView<CustomerEntity> customerView = customers.DefaultView;

' VB.NET, .NET 2.0
Dim customers As New CustomerCollection()
customers.GetMulti(Nothing) ' fetch all Customers
Dim customerView As IEntityView = customers.DefaultView
' or:
' Dim customerView As EntityView(Of CustomerEntity) = customers.DefaultView

Instead of using the EntityView class, you can use the IEntityView interface, for example if you don't know the generic type in .NET 2.0 code. The EntityView constructor has various overloads which let you specify an initial filter and/or sort expression. You can also set the filter and/or sort expression later on, as described below. Please familiarize yourself with the various methods and properties of the EntityView class by checking its entry in the LLBLGen Pro reference manual.

Filtering and sorting an EntityView
The purpose of an EntityView is to give you a 'view', based on a filter and/or a sort expression, on an in-memory entity collection. Which data contained in the related entity collection is available to you through a particular EntityView object depends on the filter set for the EntityView; the order in which the data is available is controlled by the sort expression set. As the related collection is not touched, you can have as many EntityView objects on the same entity collection as you want, all exposing different subsets of the data in the entity collection, in different orders. Filtering and sorting an EntityView is done through the normal LLBLGen Pro predicate and sort clause classes. See for more information about predicate classes: Getting started with filtering and The predicate system. The following example filters the aforementioned customers collection on all customers from the UK:
C# VB.NET, .NET 1.x VB.NET, .NET 2.0

// C#
IPredicate filter = (CustomerFields.Country == "UK");
customerView.Filter = filter;

' VB.NET, .NET 1.x
Dim filter As New FieldCompareValuePredicate(CustomerFields.Country, ComparisonOperator.Equal, "UK")
customerView.Filter = filter

' VB.NET, .NET 2.0
Dim filter As IPredicate = (CustomerFields.Country = "UK")
customerView.Filter = filter

You could also have specified this filter with the EntityView constructor. As soon as the EntityView's Filter property is set to a value, the EntityView object resets itself and applies the set IPredicate to the related entity collection; all matching entity objects are then available through the EntityView object. The EntityView's sorter works similarly. Let's sort our filtered EntityView on CompanyName, ascending. For more information about sort clauses and sort expression objects, please see: Generated code - Sorting.
C# VB.NET, .NET 1.x VB.NET, .NET 2.0

// C#
ISortExpression sorter = new SortExpression(CustomerFields.CompanyName | SortOperator.Ascending);
customerView.Sorter = sorter;

' VB.NET, .NET 1.x
Dim sorter As New SortExpression(New SortClause(CustomerFields.CompanyName, SortOperator.Ascending))
customerView.Sorter = sorter

' VB.NET, .NET 2.0
Dim sorter As New SortExpression(CustomerFields.CompanyName Or SortOperator.Ascending)
customerView.Sorter = sorter

.NET 2.0+: Use a Predicate(Of T) or Lambda expression (.NET 3.5) for a filter
In .NET 2.0, Microsoft introduced the Predicate<T> delegate type, which is used in several methods of List<T> and Array, for example. In .NET 3.5, lambda expressions were introduced, which are essentially Func<T, U> (and variants) implementations. The .NET 3.5 compilers compile a lambda expression to a Predicate<T> if the method requires a Predicate<T>, as both are under the surface simply delegates. EntityView has a couple of constructors which accept a Predicate<T>. This allows you to specify a lambda expression in .NET 3.5 to filter the entity collection, or, if you're on .NET 2.0/3.0, a delegate which compiles to Predicate<T>. The example below filters the passed-in collection of CustomerEntity instances on the Country property:
C# VB.NET

// C#
EntityView<CustomerEntity> customersFromGermany =
    new EntityView<CustomerEntity>(customers, c => c.Country == "Germany");

' VB.NET
Dim customersFromGermany = _
    New EntityView(Of CustomerEntity)(customers, Function(c) c.Country = "Germany")

Using the DelegatePredicate<T> class, a developer can also use a Predicate<T> delegate or lambda expression to filter the EntityView instance after it's been created:
C# VB.NET


// C#
EntityView<CustomerEntity> customersFromGermany = new EntityView<CustomerEntity>(customers);
customersFromGermany.Filter = new DelegatePredicate<CustomerEntity>(c => c.Country == "Germany");

' VB.NET
Dim customersFromGermany = _
    New EntityView(Of CustomerEntity)(customers)
customersFromGermany.Filter = New DelegatePredicate(Of CustomerEntity)(Function(c) c.Country = "Germany")

Multi-clause sorting
Entity collection classes themselves offer a Sort() method, which is there for backwards compatibility and was used in previous versions by the IBindingList.ApplySort() method. The Sort() method however has one drawback: it can only sort on a single field or property. What if you want to sort on multiple fields? As the EntityView lets you sort the data using a SortExpression, you can specify as many fields as you want. Let's sort the customerView on City ascending and on CompanyName descending:
C# VB.NET, .NET 1.x VB.NET, .NET 2.0

// C#
ISortExpression sorter = new SortExpression(CustomerFields.City | SortOperator.Ascending);
sorter.Add(CustomerFields.CompanyName | SortOperator.Descending);
customerView.Sorter = sorter;

' VB.NET, .NET 1.x
Dim sorter As New SortExpression(New SortClause(CustomerFields.City, SortOperator.Ascending))
sorter.Add(New SortClause(CustomerFields.CompanyName, SortOperator.Descending))
customerView.Sorter = sorter

' VB.NET, .NET 2.0
Dim sorter As New SortExpression(CustomerFields.City Or SortOperator.Ascending)
sorter.Add(CustomerFields.CompanyName Or SortOperator.Descending)
customerView.Sorter = sorter

What if you want to sort on a property of an entity which isn't an entity field? After all, Sort() allows you to do that. This is also possible: to specify a property, use the class EntityProperty instead of an entity field. So if, instead of sorting on CompanyName, you want to sort on the entity property IsDirty, to get all the changed entities first and then the non-changed entities, use this code instead:
C# VB.NET, .NET 1.x VB.NET, .NET 2.0

// C#
ISortExpression sorter = new SortExpression(CustomerFields.City | SortOperator.Ascending);
sorter.Add(new EntityProperty("IsDirty") | SortOperator.Ascending);
customerView.Sorter = sorter;

' VB.NET, .NET 1.x
Dim sorter As New SortExpression(New SortClause(CustomerFields.City, SortOperator.Ascending))
sorter.Add(New SortClause(New EntityProperty("IsDirty"), SortOperator.Ascending))
customerView.Sorter = sorter

' VB.NET, .NET 2.0
Dim sorter As New SortExpression(CustomerFields.City Or SortOperator.Ascending)
sorter.Add(New EntityProperty("IsDirty") Or SortOperator.Ascending)
customerView.Sorter = sorter

EntityProperty is usable in any construct which works with an entity field, as long as it concerns in-memory sorting or filtering. Below you'll learn how to filter an EntityView's data using an entity property.

Filtering using multiple predicates
As a PredicateExpression derives from Predicate, you can also use a PredicateExpression to filter using multiple predicates. There's a limitation however: not all predicate classes are usable for in-memory filtering. Please consult the section Generated code - The predicate system to see which classes are usable and with which specifics. The filtering also works only on the entities inside the related entity collection, not on entities inside those entities: this means you can't specify a RelationCollection, for example, to filter all Customers who have an Order from last May. To filter the customers collection on all customers from the UK whose entities have been changed, use the following code:
C# VB.NET, .NET 1.x VB.NET, .NET 2.0

// C#
IPredicateExpression filter = new PredicateExpression(CustomerFields.Country == "UK");
filter.AddWithAnd(new EntityProperty("IsDirty") == true);
customerView.Filter = filter;

' VB.NET, .NET 1.x
Dim filter As New PredicateExpression()
filter.Add(New FieldCompareValuePredicate(CustomerFields.Country, ComparisonOperator.Equal, "UK"))
filter.AddWithAnd(New FieldCompareValuePredicate(New EntityProperty("IsDirty"), ComparisonOperator.Equal, True))
customerView.Filter = filter

' VB.NET, .NET 2.0
Dim filter As New PredicateExpression(CustomerFields.Country = "UK")
filter.AddWithAnd(New EntityProperty("IsDirty") = True)
customerView.Filter = filter

View behavior on collection changes
When an entity in the related entity collection of the EntityView changes, it can happen that the entity no longer matches the filter set for the view, in which case the EntityView removes the entity from itself: it's no longer available to you through the EntityView. This can be confusing, so it is definable what the EntityView should do when the data inside the related entity collection changes. This is done by specifying a PostCollectionChangeAction value with the EntityView constructor or by setting the EntityView's DataChangeAction property. The following list describes the various values and their effect on the EntityView's behavior:
- NoAction: do nothing, i.e. don't re-apply the filter nor the sorter.
- ReapplyFilterAndSorter (default): re-applies the filter and sorter on the collection.
- ReapplySorter: re-applies the sorter on the collection, not the filter.

By default, the EntityView re-applies both the filter and the sorter. There's no setting for just the filter, as re-applying the filter could alter the set, which could change the order of the data: it's then no longer ordered and has to be re-sorted. If the related collection fires a reset event (when it is sorted using its own code, or cleared), the view is also reset and both the filter and the sorter are re-applied. If a new entity is added to the collection through code, it is not added to the view in NoAction mode or in ReapplySorter mode, because no filter is re-applied. If it's added through databinding, it is added to the view, as it is added through the EntityView: an entity collection is bound to a bound control via an EntityView, either an EntityView object you created and bound directly, or the EntityView object returned by the entity collection's DefaultView property.
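As a sketch, the change behavior can be set after construction via the DataChangeAction property, using the PostCollectionChangeAction values listed above (customerView is assumed to be an existing EntityView with a filter set):

```csharp
// Only re-apply the sorter when the collection's data changes; the filter is not
// re-applied, so entities which no longer match the filter stay in the view.
customerView.DataChangeAction = PostCollectionChangeAction.ReapplySorter;

// Default behavior: re-apply both filter and sorter on every data change.
customerView.DataChangeAction = PostCollectionChangeAction.ReapplyFilterAndSorter;
```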

Projecting data inside an EntityView on another data-structure
A powerful feature of the EntityView class is the ability to project the data in the EntityView onto a new data structure, like an entity collection, a DataTable or even custom classes (.NET 2.0 only). Projections are a way to produce custom lists of data ('dynamic lists in memory') based on the current data in the EntityView and a collection of projection objects. Projection objects are small objects which specify which entity field or entity property should be used in the projection and where to get the value from. Because the raw projection data can be used to re-instantiate new entities, for example, the data can be used to produce a new entity collection with new entities. How the data is projected depends on the projection engine used for the actual projection. For more information about projections, please also see: LLBLGen Pro - Fetching DataReaders and projections.

Projections are performed by applying a set of projection objects onto an entity and then passing the resulting data array (an array of type object) on to a projection engine, or projector, for further storage: the projected data is placed in a new instance of a class, for example an entity class, but this can also be a DataRow or a custom class (projections onto custom classes are only supported on .NET 2.0). You can use filters during the projection as well, to limit the set of data you want to project from the EntityView data. In .NET 1.x, you have to use ArrayList objects to provide the projector objects; in .NET 2.0, you can use the generic List(Of T) class.

Projection objects: EntityPropertyProjector
A projection object is an instance of the EntityPropertyProjector class. As EntityView objects contain entity objects, this is the projection object you should use. LLBLGen Pro supports other projection objects as well, for general purpose projections as discussed in Fetching DataReaders and projections, however these aren't usable with EntityViews. An EntityPropertyProjector instance contains at most two IEntityFieldCore instances (for example normal EntityField objects or an EntityProperty object) and a Predicate, for example a FieldCompareValuePredicate, or a PredicateExpression. The first IEntityFieldCore instance is mandatory; this is the default value. If a Predicate is specified (optional) and it resolves to true, the default value (thus the first IEntityFieldCore) is used, otherwise the second IEntityFieldCore instance. This way you can select per entity from two fields, for example SomeEntity.Name1 and SomeEntity.Name2: based on the predicate specified, either the value of field Name1 is used, if the predicate resolves to true, otherwise the value of Name2. The EntityPropertyProjector also contains a Name property which is used to produce the name of the result field. The projection routine used is free to use this name for column naming (projection onto a datatable) but can also use it for entity field setting (projection onto an entity). If a developer wants to execute a piece of code on the value prior to storing it in the projected slot, the developer can derive a class from EntityPropertyProjector and override ValuePostProcess(). This routine is normally empty and receives the value and the entity being processed. It all might sound a little complex, but it's fairly straightforward, as will be shown in a couple of examples below. Projecting an EntityView's data is done by the CreateProjection routine of an EntityView object.
LLBLGen Pro comes with three different projection engines: one for projecting data onto a DataTable (the class DataProjectorToDataTable), one for projecting data onto an entity collection (the class DataProjectorToEntityCollection) and, on .NET 2.0, one for projecting data onto a list of custom classes (the class DataProjectorToCustomClass). You can write your own projection engine as well: simply implement the interface IEntityDataProjector to be able to use the engine in projections of EntityView data. If you also want to use the same engine in projections of resultsets, as discussed in Fetching DataReaders and projections, you should also implement the similar interface IGeneralDataProjector. Because the interface implementations can re-use the actual projection engine logic, it's easy to re-use projection code for both projection mechanisms. Only the data which is available to you through the EntityView can be projected. You can't project nested data inside entities, nor entity data not in the EntityView. In that case, create a new EntityView on the same entity collection using a different filter and project that EntityView object instead.

Creating EntityPropertyProjector instances for all entity fields
Sometimes you want to project all fields of a given entity, and it can be cumbersome to create a lot of EntityPropertyProjector objects if your entity has many fields. Instead, you can use the shortcut method on


EntityFields2: EntityFields2.ConvertToProjectors(EntityFieldsFactory.CreateEntityFieldsObject(EntityType.entitynameEntity)). This method will return a List of IEntityPropertyProjector objects, one for each entity field of the specified entity type.
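For example, to project every field of a CustomerEntity onto a DataTable without writing one projector per field, the shortcut can be used like this. A sketch: customersView is assumed to be an existing EntityView over a CustomerCollection, and CustomerEntity is the generated Northwind entity used throughout this documentation:

```csharp
// One IEntityPropertyProjector per CustomerEntity field, created in a single call.
List<IEntityPropertyProjector> propertyProjectors =
    EntityFields2.ConvertToProjectors(
        EntityFieldsFactory.CreateEntityFieldsObject(EntityType.CustomerEntity));

// Project all customer fields in the view onto a DataTable.
DataTable projectionResults = new DataTable();
customersView.CreateProjection(propertyProjectors, projectionResults);
```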

Examples of EntityView projections
Projection to datatable.
C# VB.NET

// C#
CustomerCollection customers = new CustomerCollection();
customers.GetMulti(null); // fetch all customers
// create a view of all customers in Germany
EntityView customersInGermanyView = new EntityView(
	customers, (CustomerFields.Country == "Germany"), null);
// create a projection of these customers of just the city and the customer id.
// for that, define 2 property projectors, one for each field to project
ArrayList propertyProjectors = new ArrayList();
propertyProjectors.Add(new EntityPropertyProjector(CustomerFields.City, "City"));
propertyProjectors.Add(new EntityPropertyProjector(CustomerFields.CustomerId, "CustomerID"));
DataTable projectionResults = new DataTable();
// create the actual projection.
customersInGermanyView.CreateProjection(propertyProjectors, projectionResults);

' VB.NET
Dim customers As New CustomerCollection()
customers.GetMulti(Nothing) ' fetch all customers
' create a view of all customers in Germany
Dim customersInGermanyView As New EntityView(customers, _
	New FieldCompareValuePredicate(CustomerFields.Country, ComparisonOperator.Equal, "Germany"), Nothing)
' create a projection of these customers of just the city and the customer id.
' for that, define 2 property projectors, one for each field to project
Dim propertyProjectors As New ArrayList()
propertyProjectors.Add(New EntityPropertyProjector(CustomerFields.City, "City"))
propertyProjectors.Add(New EntityPropertyProjector(CustomerFields.CustomerId, "CustomerID"))
Dim projectionResults As New DataTable()
' create the actual projection.
customersInGermanyView.CreateProjection(propertyProjectors, projectionResults)

After this code, the datatable projectionResults contains two columns, City and CustomerID, with the data for the fields City and CustomerId of each entity in the EntityView, i.e. all entities with Country equal to "Germany".

Projection to entity collection
The following example performs a projection onto an entity collection.
It uses the entities from Concepts - Entity inheritance and relational models, where Clerk is another subtype of Employee.
C# VB.NET

// C#
// fetch all managers
ManagerCollection managers = new ManagerCollection();
managers.GetMulti(null);
// now project them onto 2 new clerk entities, by just projecting the employee fields
ArrayList propertyProjectors = new ArrayList();
propertyProjectors.Add(new EntityPropertyProjector(EmployeeFields.Id, "Id"));
propertyProjectors.Add(new EntityPropertyProjector(EmployeeFields.Name, "Name"));
propertyProjectors.Add(new EntityPropertyProjector(EmployeeFields.StartDate, "StartDate"));
propertyProjectors.Add(new EntityPropertyProjector(EmployeeFields.WorksForDepartmentId, "WorksForDepartmentId"));
ClerkCollection clerks = new ClerkCollection();
EntityView managersView = managers.DefaultView;
// project data to transform all managers into clerks. ;)
managersView.CreateProjection(propertyProjectors, clerks);

' VB.NET
' fetch all managers
Dim managers As New ManagerCollection()
managers.GetMulti(Nothing)
' now project them onto 2 new clerk entities, by just projecting the employee fields
Dim propertyProjectors As New ArrayList()
propertyProjectors.Add(New EntityPropertyProjector(EmployeeFields.Id, "Id"))
propertyProjectors.Add(New EntityPropertyProjector(EmployeeFields.Name, "Name"))
propertyProjectors.Add(New EntityPropertyProjector(EmployeeFields.StartDate, "StartDate"))
propertyProjectors.Add(New EntityPropertyProjector(EmployeeFields.WorksForDepartmentId, "WorksForDepartmentId"))
Dim clerks As New ClerkCollection()
Dim managersView As EntityView = managers.DefaultView
' project data to transform all managers into clerks. ;)
managersView.CreateProjection(propertyProjectors, clerks)

After this code, the collection clerks contains ClerkEntity instances with only the EmployeeEntity fields (inherited by ClerkEntity from its base type EmployeeEntity, which is also the base type of ManagerEntity) filled with data.

.NET 2.0: projection to custom classes
This code requires .NET 2.0 or higher, due to the generics used in the DataProjectorToCustomClass projector engine. With some reflection, it is possible to create such a class for .NET 1.x, though the class itself has to be set up a little differently.
The code below also shows how to use the projectors in .NET 2.0. It uses the class TestCustomer which is given below the projection example code (in C#). The projection also shows how to project a property of an entity which isn't an entity field, namely IsDirty, using the EntityProperty class.
C#, NET 2.0 VB.NET, NET 2.0

// C# .NET 2.0
CustomerCollection customers = new CustomerCollection();
customers.GetMulti(null);
EntityView<CustomerEntity> allCustomersView = customers.DefaultView;
List<TestCustomer> customCustomers = new List<TestCustomer>();
DataProjectorToCustomClass<TestCustomer> customClassProjector =
	new DataProjectorToCustomClass<TestCustomer>(customCustomers);
List<IEntityPropertyProjector> propertyProjectors = new List<IEntityPropertyProjector>();
propertyProjectors.Add(new EntityPropertyProjector(CustomerFields.CustomerId, "CustomerID"));
propertyProjectors.Add(new EntityPropertyProjector(CustomerFields.City, "City"));
propertyProjectors.Add(new EntityPropertyProjector(CustomerFields.CompanyName, "CompanyName"));
propertyProjectors.Add(new EntityPropertyProjector(CustomerFields.Country, "Country"));
propertyProjectors.Add(new EntityPropertyProjector(new EntityProperty("IsDirty"), "IsDirty"));
// create the projection
allCustomersView.CreateProjection(propertyProjectors, customClassProjector);

' VB.NET .NET 2.0
Dim customers As New CustomerCollection()
customers.GetMulti(Nothing)
Dim allCustomersView As EntityView(Of CustomerEntity) = customers.DefaultView
Dim customCustomers As New List(Of TestCustomer)()
Dim customClassProjector As New DataProjectorToCustomClass(Of TestCustomer)(customCustomers)
Dim propertyProjectors As New List(Of IEntityPropertyProjector)()
propertyProjectors.Add(New EntityPropertyProjector(CustomerFields.CustomerId, "CustomerID"))
propertyProjectors.Add(New EntityPropertyProjector(CustomerFields.City, "City"))
propertyProjectors.Add(New EntityPropertyProjector(CustomerFields.CompanyName, "CompanyName"))
propertyProjectors.Add(New EntityPropertyProjector(CustomerFields.Country, "Country"))
propertyProjectors.Add(New EntityPropertyProjector(New EntityProperty("IsDirty"), "IsDirty"))
' create the projection
allCustomersView.CreateProjection(propertyProjectors, customClassProjector)

The custom class, TestCustomer:

/// <summary>
/// Test class for projection of fetched entities onto custom classes using a custom projector.
/// </summary>
public class TestCustomer
{
	#region Class Member Declarations
	private string _customerID, _companyName, _city, _country;
	private bool _isDirty;
	#endregion

	public TestCustomer()
	{
		_city = string.Empty;
		_companyName = string.Empty;
		_customerID = string.Empty;
		_country = string.Empty;
		_isDirty = false;
	}

	#region Class Property Declarations
	public string CustomerID
	{
		get { return _customerID; }
		set { _customerID = value; }
	}

	public string City
	{
		get { return _city; }
		set { _city = value; }
	}

	public string CompanyName
	{
		get { return _companyName; }
		set { _companyName = value; }
	}

	public string Country
	{
		get { return _country; }
		set { _country = value; }
	}

	public bool IsDirty
	{
		get { return _isDirty; }
		set { _isDirty = value; }
	}
	#endregion
}

Distinct projections.
It can be helpful to have distinct projections: no duplicate data in the projection results. Distinct projections are supported, as the following example shows; creating a distinct projection is simply a matter of passing false / False for allowDuplicates in the CreateProjection method. The example shows a couple of projection-related aspects: it filters the entity view's data using a Like predicate prior to projecting data, so you can limit the data inside an EntityView used for the projection, and it shows how a predicate is used to choose between two values in an entity to determine the end result of projecting that entity. The example uses Northwind, like most examples in this documentation. The code contains Assert statements, which are left in to show you how many elements to expect at that point in the routine.

// C#
CustomerCollection customers = new CustomerCollection();
customers.GetMulti(null);
EntityView customersInGermanyView = new EntityView(
	customers, (CustomerFields.Country == "Germany"), null);
Assert.AreEqual(11, customersInGermanyView.Count);

// create a straightforward projection of these customers of just the city and the customer id.
ArrayList propertyProjectors = new ArrayList();
propertyProjectors.Add(new EntityPropertyProjector(CustomerFields.City, "City"));
propertyProjectors.Add(new EntityPropertyProjector(CustomerFields.CustomerId, "CustomerID"));
DataTable projection = new DataTable();
customersInGermanyView.CreateProjection(propertyProjectors, projection);
Assert.AreEqual(11, projection.Rows.Count);

// do distinct filtering during the following projection. It projects ContactTitle and IsNew.
propertyProjectors = new ArrayList();
propertyProjectors.Add(new EntityPropertyProjector(CustomerFields.ContactTitle, "Contact title"));
// any entity property can be used for projection source.
propertyProjectors.Add(new EntityPropertyProjector(new EntityProperty("IsNew"), "Is new"));
projection = new DataTable();
customersInGermanyView.CreateProjection(propertyProjectors, projection, false);
Assert.AreEqual(7, projection.Rows.Count);

// do distinct filtering and filter the set to project. Re-use previous property projectors.
// 3 rows match the specified filter, distinct filtering makes it 2.
projection = new DataTable();
customersInGermanyView.CreateProjection(propertyProjectors, projection, false,
	(CustomerFields.ContactTitle % "Marketing%"));
Assert.AreEqual(2, projection.Rows.Count);

// use an alternative projection source based on a filter.
projection = new DataTable();
propertyProjectors = new ArrayList();
// bogus data, but performs what we need: for all contacttitles not matching the filter,
// CustomerId is used.
propertyProjectors.Add(new EntityPropertyProjector(CustomerFields.ContactTitle, "Contact title",
	(CustomerFields.ContactTitle % "Marketing%"), CustomerFields.CustomerId));
propertyProjectors.Add(new EntityPropertyProjector(CustomerFields.CustomerId, "CustomerID"));
// create a new projection, with distinct filtering, which gives different results
// now, because ContactTitle is now sometimes equal to CustomerId
customersInGermanyView.CreateProjection(propertyProjectors, projection, false);
Assert.AreEqual(11, projection.Rows.Count);
foreach(DataRow row in projection.Rows)
{
	if(!row["Contact title"].ToString().StartsWith("Marketing"))
	{
		Assert.AreEqual(row["Contact title"], row["CustomerID"]);
	}
}

Aggregates aren't supported in in-memory projections, though Expressions are. All expressions are fully evaluated, where '+' operators on strings result in string concatenations. The new DbFunctionCall object, used to call database functions inside an Expression object, is ignored during expression evaluation.
LLBLGen Pro v2.6 documentation. ©2002-2008 Solutions Design


Generated code - Using the context, SelfServicing
Preface
SelfServicing and Adapter both support so-called uniquing contexts. These contexts, implemented in the Context class available in the ORM Support classes, represent a semantic context in your program. Within such a semantic context, the representing Context object assures that an entity loaded by the framework is loaded in just one object. This is required, for example, for refetching trees of objects using prefetch paths, or for usage where more than one object with the same data is problematic. This section discusses the Context object in more detail. It is not necessary to use a Context object in your application; in stateless environments like ASP.NET, for example, it's not of much use. However, it can sometimes be required to have just one entity class instance with a given entity's data in a given semantic context, for example in an edit form in a windows forms application.

The Context class
Context objects have to be created by the developer, live as long as the developer wants, and keep objects in their cache as long as the Context objects live. A developer can create multiple Context objects to create different semantic contexts in which entity objects are unique. This can help when the developer has two screens with the same entity listed and wants to assure that editing the entity on one screen doesn't automatically alter the instance on the other screen as well (because the user can click Cancel, for example). The Context class works with instances of entity classes. This means that when you pass an entity instance using remoting or webservices to a server which then returns it back after processing, you won't reference the same instance of the entity you sent to the server. This is because of the serialization and deserialization which takes place during remoting. The Context class identifies entities using the PK field values. This means that new entities aren't directly added to the Context's internal object cache, as for example Identity columns won't have a value for the PK field until the entity is saved and the transaction has been committed. When an entity is saved and the transaction is committed (if any), the entity is added to the Context's cache if the entity is new. A non-new entity object which is added to a Context object is directly added to the Context's internal object cache, if the entity object hasn't been added to another Context already. Entity objects can be added to the Context at any time, as can entity collection objects. When an entity object is added to a Context, its internally referenced related entity objects and entity collection objects are added to the same Context object as well.
When an entity collection is added to a Context object, all its entity objects will be added to the Context, and every entity object added to the entity collection after that point will be added to the same Context object as well, which makes it easy to work with a Context object, as it is mostly transparent. Context objects don't act as a cache which is used to prevent database activity: every query is still executed on the database. If an entity is already loaded in the used Context object, the entity data is not added to a new entity object; instead, the entity object already loaded is updated. If the already loaded object is dirty, the data isn't updated, the loaded entity data is simply skipped, and the already loaded entity object is returned as-is. This is done in the Get routine of the Context object. The Context has a flag to disallow this particular action: SetExistingEntityFieldsInGet. See the LLBLGen Pro reference manual for details on this flag. You can of course use the Context as an object cache for single object fetches, though keep in mind that a Context object simply acts as a unique instance supplier; it doesn't fetch objects from the database, so if you request an entity instance from a Context object using Get and the Context object can't find it in its cache, you have to test if the returned object is indeed a fetched entity object or a new entity object. Context objects don't fetch data themselves, to keep the fetch logic placed at just a few known places and to avoid fragmentation of this logic, which could blur the overview of where the code actually performs fetch activity. Adding an entity object which is already present in the Context is a no-op, as is adding an entity object which is already part of another Context object. After an entity is added to a Context object, and a 1:1/m:1


reference is set to an entity class instance, the related entity is not added to the context automatically; this has to be done manually by the developer. However, when an entity is added to a collection which is added to a context, the entity is added to that context as well. The Context object an entity is added to is returned by the entity's ActiveContext property. When an entity is deleted, the status of the entity is set to Deleted by the delete routines. The Context.Get method will remove an entity from the store if the entity is deleted and not participating in a transaction. Until then, the entity is kept in the Context's object cache.
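Pulling the above together, a typical usage pattern looks like the following sketch. It assumes the generated Northwind CustomerEntity (with an Orders collection property) used throughout this documentation:

```csharp
// One semantic context, for example per edit screen.
Context editContext = new Context();

// Fetch a customer and make it known to the context.
CustomerEntity customer = new CustomerEntity("CHOPS");
editContext.Add(customer);

// Adding a collection adds all its entities to the same context,
// as well as every entity added to the collection afterwards.
editContext.Add(customer.Orders);

// The context an entity participates in is reachable via ActiveContext:
Context active = customer.ActiveContext;  // same object as editContext
```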

Using the Context class
The Context class should be seen as a convenience providing class for uniquing within a semantic context. It shouldn't be confused with a UnitOfWork + Object Fetch object, because it leaves that functionality to other objects and methods.

Retrieving instances from a Context
A Context object supplies a Get method which offers different ways to retrieve the already loaded instance for a given entity. As a Context object uses the value(s) of the PK field(s), you can use these to retrieve the unique instance. The different ways are illustrated below; each will try to retrieve the instance which already contains the entity data for the customer with CustomerID "CHOPS".
C# VB.NET

// C#
// using a factory
CustomerEntity c = (CustomerEntity)myContext.Get(new CustomerEntityFactory(), "CHOPS");
// using a fetched entity
CustomerEntity c = new CustomerEntity("CHOPS");
c = (CustomerEntity)myContext.Get(c);

' VB.NET
' using a factory
Dim c As CustomerEntity = CType(myContext.Get(New CustomerEntityFactory(), "CHOPS"), CustomerEntity)
' using a fetched entity
Dim c As New CustomerEntity("CHOPS")
c = CType(myContext.Get(c), CustomerEntity)

Single entity fetches
Creating an entity with the constructor (indirect fetch) means there is no way the context can supply a unique instance of the entity. To be able to do that use this instead:
C# VB.NET

// C#
CustomerEntity c = (CustomerEntity)myContext.Get(new CustomerEntity("CHOPS"));

' VB.NET
Dim c As CustomerEntity = CType(myContext.Get(New CustomerEntity("CHOPS")), CustomerEntity)

This will fetch customer "CHOPS" from the database, but the context will check if the entity is already loaded in this context. If so, it will return that instance, not the newly created instance. If the entity object isn't known by the Context, it is added to the Context and the Context returns the instance created in the Get() method call. Entities can also be added manually first and then fetched:


C# VB.NET

// C#
CustomerEntity c = new CustomerEntity();
myContext.Add(c);
c.FetchUsingPK("CHOPS");

' VB.NET
Dim c As New CustomerEntity()
myContext.Add(c)
c.FetchUsingPK("CHOPS")

Or, using a unique constraint:
C# VB.NET

// C#
CustomerEntity c = new CustomerEntity();
myContext.Add(c);
c.FetchUsingUCCompanyName("Foo Inc.");

' VB.NET
Dim c As New CustomerEntity()
myContext.Add(c)
c.FetchUsingUCCompanyName("Foo Inc.")

It has to be understood, though, that the actual instance 'c' is only unique if the particular entity hasn't been loaded yet. This is due to the c = new CustomerEntity() line. Fetching using unique constraints is a bit problematic in this case. To avoid that, you can do:
C# VB.NET

// C#
CustomerEntity c = new CustomerEntity();
c.FetchUsingUCCompanyName("Foo Inc."); // fetch.
c = (CustomerEntity)myContext.Get(c); // get unique version. No db activity.

' VB.NET
Dim c As New CustomerEntity()
c.FetchUsingUCCompanyName("Foo Inc.") ' fetch.
c = CType(myContext.Get(c), CustomerEntity) ' get unique version. No db activity.

Prefetch Path fetches
When fetching an entity using a prefetch path, use either FetchUsingPK() or FetchUsingUCFieldnames(). Both have an overload which accepts a Context object. If you're fetching a graph and you want, for every entity already loaded in a particular Context, the instance in which that entity is already loaded, you can pass in the Context these entity objects were added to. The fetch logic will then build the object graph using the instances from the passed-in Context; otherwise it will read the entity data into newly created entity objects.

Note: When fetching an entity collection, you have to add the collection to fetch to the Context object first and then call the GetMulti method.
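In code, that looks like the following sketch (myContext is assumed to be an existing Context object):

```csharp
// Fetch a collection inside a context, so instances already loaded
// in myContext are re-used instead of new entity objects being created.
CustomerCollection customers = new CustomerCollection();
myContext.Add(customers);           // add the collection to the context first
customers.GetMulti(null);           // then fetch; entities are unique within myContext
```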


Entity Save calls
When Save() is called on an entity, the base class' save routine will signal the Context the entity is in (if any) that the entity in question is saved and, if the entity is new, that it should be moved to the normal object cache inside the Context, provided the entity is not in a transaction. If the entity is in a transaction, this activity is performed after the transaction is committed, in the entity's base class transaction commit routine. This is done to prevent the Context object from moving a new entity to the object cache even though the transaction rolled back. If a recursive save saves an entity which is not yet in the active context, the entity is added to the active context.

Multi-entity activity
Actions on entity collections work inside the active context if the collection is first added to a context. All persistence logic will re-use objects from the Context object if the entity collection used has been added to a context. SaveMulti() will first add any entities saved to the context the collection is in, if the entity isn't already in the context.

Remarks
PK values shouldn't be changed: the Context relies on non-changing PK values.
A Context shouldn't be used as a cache, nor should it be kept alive for a long time; just long enough for the semantic context to use unique objects in.
Entities which are deleted in the database directly are not picked up by the Context. This is something the developer has to take into account when deleting entities directly.
As the Context class doesn't use any locking mechanism, the Context object isn't thread-safe and should be used for single-thread semantic contexts only.


Generated code - Using the typed view classes, SelfServicing
Preface
LLBLGen Pro supports read-only lists based on a database view (Typed View) or a selection of entity fields from one or more entities with a relation (not m:n) (Typed List). Although both elements are different, they both will be generated as a typed DataTable, which is a variant of the typed DataSet concept included in Visual Studio.NET. A typed DataTable is a class which derives from the .NET DataTable and defines properties and a row class to access the individual fields in a typed fashion. In this section the typed view classes are briefly discussed and their usage is illustrated using examples.

Instantiating and using a Typed View
As described in the concepts , a Typed View definition is a 1:1 mapping of a database view on an element in an LLBLGen Pro project. A typed view contains, for each database view column, a field with the same name or the name you gave it. When the typed view element is generated into code, it will end up as a class derived from DataTable, a typed DataTable to be exact, which is usable as a read-only list. The code further sports filter functionality which will be included in the query on the database view so you can limit the number of rows based on the filter you specify, using standard filtering techniques for LLBLGen Pro: Predicate Expressions. (See: Getting started with filtering ). Because the typed DataTable is derived from the .NET DataTable class, it can be used to create DataView objects as well, in which you can specify additional filtering, sorting, calculations etc. As an illustration, we'll include the view 'Invoices' from the Northwind database, and use the Typed View 'Invoices' for the code examples.

Instantiating and filling a Typed View
To create an instance of the 'Invoices' typed view and fill it with all the data in the view, the following code is sufficient:
C# VB.NET

// [C#]
InvoicesTypedView invoices = new InvoicesTypedView();
invoices.Fill();

' [VB.NET]
Dim invoices As New InvoicesTypedView()
invoices.Fill()

The rows will be added as they are received from the database provider; no sorting nor filtering will be applied. Furthermore, all rows in the view are read, which is probably not what you want. Let's filter the rows, so the Fill() method will only return those rows with an OrderID larger than 11000:
C# VB.NET

// [C#]
InvoicesTypedView invoices = new InvoicesTypedView();
IPredicateExpression invoicesFilter = new PredicateExpression(InvoicesFields.OrderID > 11000);
invoices.Fill(0, null, true, invoicesFilter);

' [VB.NET]
Dim invoices As New InvoicesTypedView()
Dim invoicesFilter As IPredicateExpression = New PredicateExpression( _
	New FieldCompareValuePredicate(InvoicesFields.OrderID, ComparisonOperator.GreaterThan, 11000))
invoices.Fill(0, Nothing, True, invoicesFilter)

The overloaded Fill() version that accepts a filter also accepts other parameters:
maxNumberOfItemsToReturn, which is used to limit the number of rows returned. When this parameter is set to 0, it is ignored (all rows are returned).
sortClauses, which is a collection of SortClause objects and is used to sort the rows before they're added to the Typed View object. When this parameter is set to null, no sorting is performed.
allowDuplicates. Most views don't contain duplicate rows, but if they do, you can filter them out using this setting.
Specifying a filter will narrow down the number of rows to the ones matching the filter. The filter can be as complex as you want. See Getting started with filtering and Sorting for filtering information and how to set up sort clauses.
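The other parameters can be combined with a filter. For example, to fetch only the first 25 matching rows, unsorted, a sketch using the same Invoices typed view:

```csharp
// Limit the resultset to 25 rows, no sorting, duplicates allowed.
InvoicesTypedView invoices = new InvoicesTypedView();
IPredicateExpression invoicesFilter =
    new PredicateExpression(InvoicesFields.OrderID > 11000);
invoices.Fill(25, null, true, invoicesFilter);
```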

Reading a value from a filled Typed View
After we've filled the typed view object, we can use the values read. As said, Typed View and Typed List objects in the form of a typed DataTable are read-only, and therefore extremely handy for filling lists on screen or on a website, but not usable for data manipulation. For modifying data you should use the entity classes / collection classes. Below, we'll read a given value from row 0: the value for the Salesperson field. We assume the invoices object is filled with data using any of the previously mentioned ways to do so.
C# VB.NET

// [C#]
string salesPerson = invoices[0].Salesperson;

' [VB.NET]
Dim salesPerson As String = invoices(0).Salesperson

That's it. The '0' points to the row, and the row is 'typed', thus has named properties for the individual columns in the object; you can just read the value using a property.

Null values
Because the TypedView (and TypedList) classes are derived from DataTable, the underlying DataTable cells still contain System.DBNull.Value if the field in the database is NULL. You can test for NULL by using the generated IsFieldNameNull() methods. Reading a field whose value is System.DBNull.Value in code, like the example above, will result in the default value for the type of the field, as defined in the TypeDefaultValue class. Databinding will result in the usage of a DataView, as that's built into the DataTable, which will then return the System.DBNull.Value values and not the TypeDefaultValue values.
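For example, before reading a field which may be NULL in the database, you can test it first. A sketch: the exact generated method name follows the IsFieldNameNull() pattern, so IsSalespersonNull() here is an assumption based on that pattern:

```csharp
// Guard against NULL before using the value.
string salesPerson = string.Empty;
if(!invoices[0].IsSalespersonNull())
{
    salesPerson = invoices[0].Salesperson;
}
```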

Limiting and sorting a typed view
To sort the data in the typed view, we're not actually sorting the data in the object, but sorting the data before it is read into the object, thus using a sort operator in the actual SQL query. To do that, you specify a set of sort clauses to the Fill() method. Below, sorting the invoices typed view on the field 'ExtendedPrice' in descending order is illustrated. Sort clauses are easily created using the SortClause factory in the generated code. We pass the same filter as mentioned earlier.
C# VB.NET

// [C#]
invoices.Clear(); // clear all current data
ISortExpression sorterInvoices = new SortExpression(InvoicesFields.ExtendedPrice | SortOperator.Descending);
invoices.Fill(0, sorterInvoices, true, invoicesFilter);

' [VB.NET]
invoices.Clear() ' clear all current data
Dim sorterInvoices As ISortExpression = New SortExpression( _
	New SortClause(InvoicesFields.ExtendedPrice, SortOperator.Descending))
invoices.Fill(0, sorterInvoices, True, invoicesFilter)

The rows are now sorted on the ExtendedPrice field, in descending order.
LLBLGen Pro v2.6 documentation. ©2002-2008 Solutions Design


Generated code - Using the typed list classes, SelfServicing
Preface
LLBLGen Pro supports read-only lists based on a database view (Typed View) or on a selection of entity fields from one or more entities with a relation (not m:n) (Typed List). Although the two elements differ, both are generated as a typed DataTable, a variant of the typed DataSet concept included in Visual Studio.NET. A typed DataTable is a class which derives from the .NET DataTable class and defines properties and a row class to access the individual fields in a typed fashion. In this section the typed list classes are briefly discussed and their usage is illustrated with examples. It's recommended you read the Using the typed view classes, SelfServicing section as well, as it discusses constructs which are shared with the Typed List.

Instantiating and using a Typed List
Using a Typed List is similar to using a Typed View; both are generated as typed DataTables. There is a difference, however, in how you formulate filters and sort clauses for a typed list. The reason is that a typed list is constructed from existing entity fields, while a typed view uses its own field definitions. When you construct a filter for a typed list, you specify fields of the entities which form the base of the typed list; you can also filter on fields which are not included in the typed list's own column set. As an example, we construct a typed list from the entities Customer and Order and include the following fields in the resultset: Order.OrderID, Order.OrderDate, Order.ShippedDate, Customer.CustomerID and Customer.CompanyName. (Do this by checking these fields in the field list in the typed list editor.) You can now filter on any field in Order, Customer or both, and likewise sort on any field in Order, Customer or both. Let's filter this typed list on all orders from customers from 'Brazil', and sort the list on the field Order.Freight, ascending. The typed list is called OrderCustomer.

C# VB.NET

// [C#]
OrderCustomerTypedList orderCustomer = new OrderCustomerTypedList();
IPredicateExpression filter = new PredicateExpression(CustomerFields.Country == "Brazil");
ISortExpression sorter = new SortExpression(OrderFields.Freight | SortOperator.Ascending);
// Set allowDuplicates to true, because we sort on a field that is not in our resultset and we use SqlServer.
orderCustomer.Fill(0, sorter, true, filter);

' [VB.NET]
Dim orderCustomer As New OrderCustomerTypedList()
Dim filter As IPredicateExpression = New PredicateExpression( _
    New FieldCompareValuePredicate(CustomerFields.Country, ComparisonOperator.Equal, "Brazil"))
Dim sorter As ISortExpression = New SortExpression( _
    New SortClause(OrderFields.Freight, SortOperator.Ascending))
' Set allowDuplicates to true, because we sort on a field that is not in our resultset and we use SqlServer.
orderCustomer.Fill(0, sorter, True, filter)

The Typed List object is now filled with the rows for the 5 columns we've specified in the Typed List editor, sorted on Order.Freight ascending and filtered on Customer.Country equals "Brazil".

Note : TypedLists still offer the functionality of Weak Relations through the property ObeyWeakRelations (see the description of weak relations in Filtering and Sorting). It's recommended, however, to use the JoinHint specifications for the relations in the TypedList editor in the LLBLGen Pro Designer instead.


Generated code - Using dynamic lists, SelfServicing
Preface
LLBLGen Pro lets you create lists in code, without needing the designer. This can come in handy if you just want to pull a small list of data from the database without having to re-generate the code. The following paragraphs briefly discuss how to create a dynamic list in code. Dynamic lists use the same building blocks as the Typed View and Typed List classes and can be used with normal filters and other constructs like group by.

LLBLGen Pro v2.0's different ResultsetFields classes
In LLBLGen Pro v1.0.2005.1 and earlier, the generated ResultsetFields class in the HelperClasses namespace contained a lot of overloads of the DefineField() method. By default, LLBLGen Pro still generates this code, but it also offers a new, more efficient way to define fields in a ResultsetFields object: instead of the FieldIndex enums, it uses the generated entitynameFields classes. If you're creating a new project with LLBLGen Pro, you can avoid having all the DefineField overloads in the ResultsetFields class by moving the BackwardsCompatibility templatebindings to the bottom of the list in the Generator configuration dialog. Please see: Designer - Generating code. By default the BackwardsCompatibility templatebindings take precedence over the more compact newer templates, so people who upgrade existing code to LLBLGen Pro v2.0 don't have to change that code. The examples in this section use the new approach: DefineField(field object, ...).

Creating dynamic lists
Typed lists are great, but sometimes you need a small list of data, built from one or more entities, used in a read-only way, and you don't really need the typed functionality that comes with a typed list. After all, a typed list requires you to go into the designer, create the list and re-generate the code. Dynamic lists are based on entity fields and use code similar to what the TypedList classes use internally. These lists are loaded into DataTable objects. The following example shows you how to create such a dynamic list. The example uses aggregates and a GroupByCollection to read a custom resultset into a DataTable, built entirely from entity fields. Because you can use the same logic Typed Lists use internally, fetching the data is simply a matter of calling a method in a class which is already available.
C# VB.NET

// C#
ResultsetFields fields = new ResultsetFields(3);
fields.DefineField(EmployeeFields.FirstName, 0, "FirstNameManager", "Manager");
fields.DefineField(EmployeeFields.LastName, 1, "LastNameManager", "Manager");
fields.DefineField(EmployeeFields.LastName, 2, "AmountEmployees", "Employee", AggregateFunction.Count);
IRelationCollection relations = new RelationCollection();
relations.Add(EmployeeEntity.Relations.EmployeeEntityUsingEmployeeId, "Employee", "Manager", JoinHint.None);
IGroupByCollection groupByClause = new GroupByCollection();
groupByClause.Add(fields[0]);
groupByClause.Add(fields[1]);
DataTable dynamicList = new DataTable();
TypedListDAO dao = new TypedListDAO();
dao.GetMultiAsDataTable(fields, dynamicList, 0, null, null, relations, true, groupByClause, null, 0, 0);

' VB.NET
Dim fields As New ResultsetFields(3)
fields.DefineField(EmployeeFields.FirstName, 0, "FirstNameManager", "Manager")
fields.DefineField(EmployeeFields.LastName, 1, "LastNameManager", "Manager")
fields.DefineField(EmployeeFields.LastName, 2, "AmountEmployees", "Employee", AggregateFunction.Count)
Dim relations As IRelationCollection = New RelationCollection()
relations.Add(EmployeeEntity.Relations.EmployeeEntityUsingEmployeeId, "Employee", "Manager", JoinHint.None)
Dim groupByClause As IGroupByCollection = New GroupByCollection()
groupByClause.Add(fields(0))
groupByClause.Add(fields(1))
Dim dynamicList As New DataTable()
Dim dao As New TypedListDAO()
dao.GetMultiAsDataTable(fields, dynamicList, 0, Nothing, Nothing, relations, True, groupByClause, Nothing, 0, 0)

This list retrieves all managers and the number of employees they manage. Let's walk through the example to make it more understandable. It first creates the list of fields which will form the list. ResultsetFields is a class defined in the HelperClasses namespace of your generated code; it derives from EntityFields, the container for EntityField objects which is also present in every entity as the Fields property. The three lines following the declaration of the fields variable define the three fields in detail. Each line first specifies an entity field, to signal which field we want at that position of the resultset fields, then the index of the field in the ResultsetFields object, then the alias for the field in the resultset and optionally the alias for the entity this field belongs to (we join Employee twice, so we have to alias). The third field is actually the same as the second, Employee.LastName, but has an aggregate function applied to it.

LastName is not a numeric field, but the type of the field is not important when an aggregate function is applied: the field defines a column in the dynamic list and is used as a parameter for the aggregate function; the aggregate function itself, or rather the value it produces, is the actual value of the column, and its type is determined at runtime. As a DataColumn object can contain any value, this works as planned. As we have to join Employee twice, we have to define a relation collection and add the relation required for the join. The entities in the relation are aliased as "Employee" and "Manager" so the generated code knows from which table each field should be retrieved. As we're going to group by, we define the group by collection and add the fields which participate in the group by, in the order in which we want to group. We don't add the third field, as it is an aggregated field operating on the grouped data. After that, the objects are set up to retrieve the data. We use the TypedListDAO class, as that class exposes the functionality to fetch a set of data into a DataTable object; this logic is also used by every TypedList's Fill() method. We could have specified a filter as well, additional relations for the filter, and even paging parameters. This way of creating lists of data is very flexible and can easily be extended with expressions for complex resultsets, for example for usage in reports.


Generated code - using GROUP BY and HAVING clauses, SelfServicing
Preface
This section discusses the usage of GROUP BY and HAVING clauses with typed lists, typed views and dynamic lists. Usage is the same for all three.

Using GroupByCollection and Having Clauses
Data in lists, be it a TypedView, a TypedList or a Dynamic List, is often grouped into smaller sets, which are then processed by aggregate functions. LLBLGen Pro allows you to specify a GroupByCollection when calling a TypedList's or TypedView's Fill() method. The GroupByCollection can contain a custom filter which will be used as a HAVING clause in the query to generate. The filter is a normal PredicateExpression and can contain any predicate you would otherwise use in a normal filter, with one restriction: fields referred to in a HAVING clause have to be part of the GroupByCollection or have to have an aggregate function applied to them. This is an SQL restriction. For more information about aggregate functions, please see Field expressions and aggregates. To make effective use of a group by action, fields in the resultset should be in the GroupByCollection or have an aggregate function applied to them. Currently you can't apply aggregate functions in the designer, but you can in your code. The example below applies aggregates and expressions to the fields in a typed list called OrderTotals. The TypedList contains just two fields: OrderDetails.OrderId (aliased as "OrderId", at index 0) and OrderDetails.UnitPrice (aliased as "TotalPrice", as it will contain the total price of the order, at index 1). By applying expressions, aggregates and a group by action, together with a having clause, the typed list will contain all orders with a total price higher than $1,000. The data could also have been read using a dynamic list, as is illustrated later in this section. The fetch action of the TypedList is not done by calling Fill() but by using a lower-level method, as Fill() would recreate the fields collection, which is not what we want because we'd lose the expression and aggregate on the second field. Alternatively, you can derive a class from the TypedList class and override BuildResultset(), in which you apply the aggregate and expression objects; then it is possible to call Fill().
C# VB.NET

// C#
OrderTotalsTypedList orderTotals = new OrderTotalsTypedList();
IGroupByCollection groupByClause = new GroupByCollection();
// grab the fields collection.
IEntityFields fields = orderTotals.BuildResultset();
groupByClause.Add(fields[0]);
// construct Having filter.
// Expression for total price: ((unitprice * quantity) - ((unitprice * quantity) * discount))
groupByClause.HavingClause = new PredicateExpression(
    fields[1]
        .SetExpression(
            (OrderDetailsFields.UnitPrice * OrderDetailsFields.Quantity) -
            ((OrderDetailsFields.UnitPrice * OrderDetailsFields.Quantity) * OrderDetailsFields.Discount))
        .SetAggregateFunction(AggregateFunction.Sum) > 1000.0f);
TypedListDAO dao = new TypedListDAO();
dao.GetMultiAsDataTable(fields, orderTotals, 0, null, null, orderTotals.BuildRelationSet(), true, groupByClause, null, 0, 0);

' VB.NET .NET 1.x
Dim orderTotals As New OrderTotalsTypedList()
' grab the fields collection.
Dim fields As IEntityFields = orderTotals.BuildResultset()
' Expression for total price: ((unitprice * quantity) - ((unitprice * quantity) * discount))
Dim productPriceExpression As IExpression = New Expression( _
    OrderDetailsFields.UnitPrice, ExOp.Mul, OrderDetailsFields.Quantity)
Dim discountExpression As IExpression = New Expression( _
    productPriceExpression, ExOp.Mul, OrderDetailsFields.Discount)
Dim totalPriceExpression As IExpression = New Expression( _
    productPriceExpression, ExOp.Sub, discountExpression)
fields(1).ExpressionToApply = totalPriceExpression
fields(1).AggregateFunctionToApply = AggregateFunction.Sum
Dim groupByClause As IGroupByCollection = New GroupByCollection()
groupByClause.Add(fields(0))
Dim havingFilter As IPredicateExpression = New PredicateExpression()
havingFilter.Add(New FieldCompareValuePredicate(fields(1), Nothing, ComparisonOperator.GreaterThan, 1000.0F))
groupByClause.HavingClause = havingFilter
Dim dao As New TypedListDAO()
dao.GetMultiAsDataTable(fields, orderTotals, 0, Nothing, Nothing, orderTotals.BuildRelationSet(), _
    True, groupByClause, Nothing, 0, 0)

Which is equivalent to (VB.NET .NET 2.0):

Dim orderTotals As New OrderTotalsTypedList()
' grab the fields collection.
Dim fields As IEntityFields = orderTotals.BuildResultset()
Dim groupByClause As IGroupByCollection = New GroupByCollection()
groupByClause.Add(fields(0))
' construct Having filter.
' Expression for total price: ((unitprice * quantity) - ((unitprice * quantity) * discount))
groupByClause.HavingClause = New PredicateExpression( _
    fields(1) _
        .SetExpression( _
            (OrderDetailsFields.UnitPrice * OrderDetailsFields.Quantity) - _
            ((OrderDetailsFields.UnitPrice * OrderDetailsFields.Quantity) * OrderDetailsFields.Discount)) _
        .SetAggregateFunction(AggregateFunction.Sum) _
        > 1000.0)
Dim dao As New TypedListDAO()
dao.GetMultiAsDataTable(fields, orderTotals, 0, Nothing, Nothing, orderTotals.BuildRelationSet(), _
    True, groupByClause, Nothing, 0, 0)

The main part is the creation of the Expression object which calculates the proper total for an order. As expression objects are re-usable, the code might look a little verbose, but it can be re-used in your application. The GroupByCollection's HavingClause is set to a FieldCompareValuePredicate object which compares the TotalPrice value with the value 1000.0. As the field used in the predicate is the same as the field in the resultset, the proper expression and aggregate function are applied to the field in the HAVING clause, as the total price has to be re-calculated there.




Generated code - Calling a stored procedure, SelfServicing
Preface
LLBLGen Pro supports existing stored procedures by offering the ability to define calls to them. There are two types of stored procedures: procedures which do not return a resultset, called Action Stored Procedures, and procedures which return one or more resultsets, called Retrieval Stored Procedures. This section illustrates how call definitions to the stored procedures in your project are generated into code and how you can use them in your code. Each stored procedure call method has an overload with an extra ref/ByRef parameter, returnValue, which returns the return value of the stored procedure, if that's supported by the target database (SqlServer, for example, supports this).
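As a hedged sketch of that returnValue overload (the procedure name CustOrderDetail comes from the example below; the exact position of the returnValue parameter in the generated overload depends on your generated code):

```csharp
// Hypothetical sketch: call the generated overload which exposes the
// stored procedure's return value via a ref parameter.
int returnValue = 0;
DataTable resultSet = RetrievalProcedures.CustOrderDetail(10254, ref returnValue);
// returnValue now holds whatever the procedure's RETURN statement produced
// (supported on SqlServer; not every target database supports return values).
```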

Retrieval Stored Procedure Calls
When you add a call definition for a retrieval stored procedure, a static/Shared method which calls that stored procedure is added to a class called RetrievalProcedures . When the stored procedure returns a single resultset (the most common approach with SqlServer; with Oracle this would be a stored procedure/function with a single REF CURSOR output parameter), the return type of the generated method is DataTable. When the stored procedure returns more than one resultset, the return type of the generated method is DataSet, containing each resultset in a separate DataTable. For example, when we include a call definition in our LLBLGen Pro project to the procedure in Northwind called 'CustOrderDetail', taking one parameter, an OrderID, a static method called CustOrderDetail is created, returning a DataTable (because the procedure returns a single resultset) and accepting a single parameter, orderID, of type int/Integer because the parameter itself is of type integer. To utilize this method in your own code, you call it in the following way. Let's pass in the orderID 10254 as parameter value:
C# VB.NET

// [C#]
DataTable resultSet = RetrievalProcedures.CustOrderDetail(10254);

' [VB.NET]
Dim resultSet As DataTable = RetrievalProcedures.CustOrderDetail(10254)

Nothing more is needed. This one line of code passes 10254 as the value for the parameter of the stored procedure CustOrderDetail and returns the result in a DataTable object. When a stored procedure has more than one parameter, all parameters are specified as input parameters for the method calling the stored procedure. Output parameters are also supported. When a stored procedure has an output parameter, a parameter representing it is added to the method heading, defined as 'ref' (C#) or 'ByRef' (VB.NET). Illustrated below is the call to an imaginary stored procedure which returns a datatable, takes 4 input parameters and returns a value in an output parameter:
C# VB.NET

// [C#]
int outputValue = 0;
DataTable resultSet = RetrievalProcedures.MyStoredProcedure(1, 2, 3, 4, ref outputValue);

' [VB.NET]
Dim outputValue As Integer
Dim resultSet As DataTable = RetrievalProcedures.MyStoredProcedure(1, 2, 3, 4, outputValue)

Action Stored Procedure Calls
If you have added a call definition for a procedure which does not return a resultset, the static/Shared method is added to the class ActionProcedures . Instead of returning a DataTable or DataSet, a method in this class returns an int/Integer, which represents the return value of the ExecuteNonQuery() method: the number of rows affected, if the database has row counting enabled (and the stored procedure doesn't switch it off). Otherwise the action stored procedure methods work the same as the retrieval stored procedure methods mentioned above: input parameters are defined as normal parameters for the method and output parameters are defined as ref/ByRef parameters.
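As a short sketch (the procedure name UpdateUnitPrices and its parameters are hypothetical; the generated method matches whatever action procedures your own project defines), calling an action procedure could look like:

```csharp
// Hypothetical action procedure call: the generated static method returns
// the ExecuteNonQuery() result, i.e. the number of rows affected when the
// database has row counting enabled.
int rowsAffected = ActionProcedures.UpdateUnitPrices(10, 1.05m);
Console.WriteLine("Rows affected: " + rowsAffected);
```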

Transaction Support
Both Action and Retrieval Procedures can be called within an active transaction. By using the overload which accepts an ITransaction object (be it a Transaction or a TransactionComPlus object), you can pass in an existing ITransaction instance which contains a valid ADO.NET connection object to use when calling the stored procedure. This makes the stored procedure run inside the transaction controlled by the ITransaction object.

Note : If you pass in a Transaction object, which controls an ADO.NET transaction, keep in mind that this transaction is already a valid database transaction, so when you roll back a transaction inside your stored procedure, that statement rolls back the entire transaction, at least on SqlServer.
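A minimal sketch of this pattern, assuming a hypothetical action procedure ArchiveOrders and the ITransaction-accepting overload described above (the Transaction constructor shown is the SelfServicing one):

```csharp
// Run a stored procedure call inside an existing SelfServicing transaction.
Transaction transaction = new Transaction(IsolationLevel.ReadCommitted, "ProcTx");
try
{
    // Hypothetical action procedure; the overload accepting an ITransaction
    // makes the call use the transaction's connection.
    ActionProcedures.ArchiveOrders(2008, transaction);
    transaction.Commit();
}
catch
{
    transaction.Rollback();
    throw;
}
```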

Wrap call in IRetrievalQuery object
In v2.0, LLBLGen Pro started offering the ability to obtain the call to a retrieval stored procedure as an IRetrievalQuery object. An IRetrievalQuery object is the query object generated by a Dynamic Query Engine (DQE) and executed by the low-level fetch logic of LLBLGen Pro's O/R mapper core. The IRetrievalQuery object allows you to fetch a query as a datareader or to project the results of the stored procedure call onto a data structure of your choice, for example an entity collection. You retrieve an IRetrievalQuery object which wraps the call to a given stored procedure by calling the following generated method (each retrieval stored procedure has such a method generated):
C# VB.NET

// C#
IRetrievalQuery procCall = RetrievalProcedures.GetStoredProcedureCallNameCallAsQuery(parameters);

' VB.NET
Dim procCall As IRetrievalQuery = RetrievalProcedures.GetStoredProcedureCallNameCallAsQuery(parameters)

You can then pass the IRetrievalQuery object to the methods for fetching a datareader or fetching a projection. For more information, see LLBLGen Pro - Fetching DataReaders and projections.


Generated code - Fetching DataReaders and projections, SelfServicing
Preface
LLBLGen Pro v2 introduces two new ways of fetching a resultset: as an open IDataReader object and as a projection . This section discusses both and illustrates them with a couple of examples, using either a stored procedure or a query built from entity fields. Fetching a resultset as an open IDataReader is considered an advanced feature and should be used with care: an open IDataReader object represents an open cursor to data on a connected RDBMS over an open connection. This means that passing the IDataReader around in your application is not recommended. Instead, use the IDataReader in the same routine in which you called the fetch logic that created it, and immediately afterwards make sure the IDataReader gets closed and disposed. This way you're sure you free up resources early. To understand projections better, it's recommended to first read the section about fetching an open IDataReader. Another section describing projections, but related to an entity view object, is Generated code - using the EntityView class .

Fetching a resultset as an open IDataReader
To fetch a resultset as an open IDataReader, you call one of the overloads of GetAsDataReader , a method of the class TypedListDAO . There are two ways to use the GetAsDataReader method: by supplying a ready-to-use IRetrievalQuery, or by specifying a fields list and the various other elements required by the Dynamic Query Engine (DQE) for creating a new query. The first option, the IRetrievalQuery option, can be used to fetch a retrieval stored procedure as an open IDataReader, by using the RetrievalProcedures.GetStoredProcedureNameCallAsQuery() method of the particular stored procedure call. This is a generated method, one for every retrieval stored procedure call known in the LLBLGen Pro project. GetAsDataReader also accepts a parameter of type CommandBehavior . This parameter is very important, as it controls what happens when the datareader is closed. It's required to specify a behavior different from CloseConnection if the fetch is inside a transaction and the connection has to stay open after the datareader has been closed. In SelfServicing it's otherwise especially recommended to set CommandBehavior to CloseConnection, as closing the connection yourself can be a little problematic, because it's abstracted away from you. It's recommended to familiarize yourself with the various overloads of the GetAsDataReader method using the LLBLGen Pro reference manual. The method is defined on DaoBase, the base class of the TypedListDAO class. It's possible to construct your own IRetrievalQuery object with your own SQL, by instantiating a new RetrievalQuery object. In general, however, it's recommended to use the GetAsDataReader overloads which accept a fields list and other elements, and let LLBLGen Pro generate the query for you.

Fetching a Retrieval Stored Procedure as an IDataReader
An example of calling a procedure and receiving a datareader from it is listed below. It calls the Northwind stored procedure CustOrdersOrders, which returns a single resultset with 4 fields. The example simply prints the output on the console. The VB.NET example uses a Try / Finally block as VB.NET for .NET 1.x doesn't support the Using statement. Users of VB.NET for .NET 2.0 can replace the Try/Finally block with the Using statement as illustrated in the C# example.
C# VB.NET

// C#
TypedListDAO dao = new TypedListDAO();
IDataReader reader = dao.GetAsDataReader(null,
    RetrievalProcedures.GetCustOrdersOrdersCallAsQuery("CHOPS"), CommandBehavior.CloseConnection);
while( reader.Read() )
{
    Console.WriteLine( "Row: {0} | {1} | {2} | {3} |",
        reader.GetValue( 0 ), reader.GetValue( 1 ), reader.GetValue( 2 ), reader.GetValue( 3 ) );
}
reader.Close();

' VB.NET
Dim dao As New TypedListDAO()
Dim reader As IDataReader = dao.GetAsDataReader(Nothing, _
    RetrievalProcedures.GetCustOrdersOrdersCallAsQuery("CHOPS"), CommandBehavior.CloseConnection)
While reader.Read()
    Console.WriteLine( "Row: {0} | {1} | {2} | {3} |", _
        reader.GetValue( 0 ), reader.GetValue( 1 ), reader.GetValue( 2 ), reader.GetValue( 3 ) )
End While
reader.Close()

Fetching a Dynamic List as an IDataReader
An example of a dynamic list used to receive a datareader is listed below. The example simply prints the output on the console.
C# VB.NET

// C#
ResultsetFields fields = new ResultsetFields( 3 );
// simply set the fields in the indexes, which will use the field name for the column name
fields[0] = CustomerFields.CustomerId;
fields[1] = CustomerFields.CompanyName;
fields[2] = OrderFields.OrderId;
PredicateExpression filter = new PredicateExpression( CustomerFields.Country == "Germany" );
RelationCollection relations = new RelationCollection();
relations.Add( CustomerEntity.Relations.OrderEntityUsingCustomerId );
TypedListDAO dao = new TypedListDAO();
IDataReader reader = dao.GetAsDataReader( null, fields, filter, relations, CommandBehavior.CloseConnection, 0, true );
while( reader.Read() )
{
    Console.WriteLine( "Row: {0} | {1} | {2} |",
        reader.GetValue( 0 ), reader.GetValue( 1 ), reader.GetValue( 2 ) );
}
reader.Close();

' VB.NET
Dim fields As New ResultsetFields( 3 )
' simply set the fields in the indexes, which will use the field name for the column name
fields(0) = CustomerFields.CustomerId
fields(1) = CustomerFields.CompanyName
fields(2) = OrderFields.OrderId
Dim filter As New PredicateExpression()
filter.Add( _
    New FieldCompareValuePredicate(CustomerFields.Country, ComparisonOperator.Equal, "Germany"))
Dim relations As New RelationCollection()
relations.Add(CustomerEntity.Relations.OrderEntityUsingCustomerId)
Dim dao As New TypedListDAO()
Dim reader As IDataReader = dao.GetAsDataReader( Nothing, fields, filter, relations, CommandBehavior.CloseConnection, 0, True )
While reader.Read()
    Console.WriteLine( "Row: {0} | {1} | {2} |", _
        reader.GetValue( 0 ), reader.GetValue( 1 ), reader.GetValue( 2 ) )
End While
reader.Close()

Resultset projections
In the previous section we've seen that a query can be fetched as an open IDataReader, where the query can be an IRetrievalQuery object containing a stored procedure call, or a dynamically formulated query built from fields, a filter and other elements you might want to use in the query. It is then up to you what to do with the IDataReader. It's likely you'll project the data available to you through the IDataReader object onto a data structure. Projecting a resultset is a term from relational algebra; Wikipedia has a formal explanation of it: Projection (relational algebra) (opens in a new window). It comes down to creating a new set of data from an existing set of data. The existing set of data is the resultset you want to project; the new set is the projection result. LLBLGen Pro offers two different projection mechanisms: projecting an EntityView (see: Generated code - using the EntityView class ) and projecting a fetched resultset, which is discussed here. Both mechanisms are roughly the same; only the origin of the source data differs, as well as the interface implemented by the projection engine used. The projections of entity view data are a little more advanced, because it's possible to execute in-memory filters on the entity object itself to select which fields to project. This means that the projector objects, as discussed in the EntityView projection documentation, all implement IDataValueProjector, but the projector objects used for EntityView projections also implement IEntityPropertyProjector, the interface derived from IDataValueProjector. For projections of EntityView data, EntityPropertyProjector objects are used; for projections of resultset data, the simpler DataValueProjector objects are used. Their meaning is roughly the same, so if you're familiar with EntityView projections, you'll directly understand the examples below using DataValueProjector objects.

As the projection engine interfaces required for both mechanisms are fairly similar, the shipped projection engines can be used for both mechanisms. Resultset projections are done by an IGeneralDataProjector implementation. IGeneralDataProjector allows an object[] array of values to be projected onto new instances of whatever class is supported by the IGeneralDataProjector implementation, for example new entities or a DataRow in a DataTable. Which values in the object[] array are projected onto which properties of the target element, created by the IGeneralDataProjector implementation, is specified by the set of IDataValueProjector implementations passed in. In the following examples you'll see that the usage of the projection engines is similar to their usage in the EntityView projection examples. In SelfServicing, the TypedListDAO class, available in the DaoClasses namespace of the generated code, has a method called GetAsProjection with various overloads. This method produces the projection of the resultset defined by the input parameters (similar to the GetAsDataReader method) or of the resultset passed in in the form of an open IDataReader object. The projection engine which performs the projection, as well as the data to project, is passed in as well. GetAsProjection doesn't return a value; the result is in the projection engine object. This method has overloads similar to GetAsDataReader's, though it doesn't accept a CommandBehavior: if a connection is open, it leaves it open; if no connection is open, it creates one and closes it afterwards.
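Conceptually, a value projector is just an instruction of the form "take the value at this index of the source row and store it under this target name". The following self-contained C# sketch (plain .NET, no LLBLGen Pro types; the class ValueProjector and the method Project are made up for illustration) mimics what the real projectors and projection engine do:

```csharp
using System;
using System.Collections.Generic;

// Illustrative stand-in for what an IDataValueProjector describes:
// which source index maps to which target name.
class ValueProjector
{
    public string TargetName;   // property/column name in the projection result
    public int SourceIndex;     // index in the source object[] row

    public ValueProjector(string targetName, int sourceIndex)
    {
        TargetName = targetName;
        SourceIndex = sourceIndex;
    }
}

class Program
{
    // Project each source row (object[]) onto a dictionary, keeping only the
    // values the projectors select: the essence of a relational projection.
    static List<Dictionary<string, object>> Project(
        List<object[]> rows, List<ValueProjector> projectors)
    {
        var result = new List<Dictionary<string, object>>();
        foreach(object[] row in rows)
        {
            var projected = new Dictionary<string, object>();
            foreach(ValueProjector p in projectors)
            {
                projected[p.TargetName] = row[p.SourceIndex];
            }
            result.Add(projected);
        }
        return result;
    }

    static void Main()
    {
        // Source resultset rows: CustomerId, CompanyName, Country.
        var rows = new List<object[]>
        {
            new object[] { "ALFKI", "Alfreds Futterkiste", "Germany" },
            new object[] { "CHOPS", "Chop-suey Chinese", "Switzerland" }
        };
        // Project only CustomerId (index 0) and Country (index 2).
        var projectors = new List<ValueProjector>
        {
            new ValueProjector("CustomerId", 0),
            new ValueProjector("Country", 2)
        };
        foreach(var row in Project(rows, projectors))
        {
            Console.WriteLine(row["CustomerId"] + " | " + row["Country"]);
        }
    }
}
```

In the real API the role of Project is played by the IGeneralDataProjector implementation, which creates the target objects (entities, DataRows), while the IDataValueProjector instances supply the index-to-property mapping.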

Projecting Stored Procedure resultset onto entity collection
For this stored procedure projection example, the following stored procedure is used:

CREATE procedure pr_CustomersAndOrdersOnCountry
    @country VARCHAR(50)
AS
SELECT * FROM Customers WHERE Country = @country
SELECT * FROM Orders
WHERE CustomerID IN
(
    SELECT CustomerID FROM Customers WHERE Country = @country
)

This is a SqlServer stored procedure which returns 2 resultsets: the first contains all customers filtered on a given Country, and the second all orders of those filtered customers. The stored procedure is fetched as an open IDataReader and both resultsets are projected onto entity collections: the first resultset onto a CustomerCollection object and the second onto an OrderCollection object. The stored procedure uses a wildcard select list; this is for simplicity. The code below is written using .NET 1.x for clarity. .NET 2.0 users are encouraged to use the generic variants of the discussed classes instead, as also discussed in Generated code - using the EntityView class .
C# VB.NET

// C#
CustomerCollection customers = new CustomerCollection();
OrderCollection orders = new OrderCollection();
using( IRetrievalQuery query = RetrievalProcedures.GetCustomersAndOrdersOnCountryCallAsQuery( "Germany" ) )
{
    TypedListDAO dao = new TypedListDAO();
    using( IDataReader reader = dao.GetAsDataReader( null, query, CommandBehavior.CloseConnection ) )
    {
        // first resultset: Customers.
        List<IDataValueProjector> valueProjectors = new List<IDataValueProjector>();
        // project value on index 0 in resultset row onto CustomerId
        valueProjectors.Add( new DataValueProjector( CustomerFieldIndex.CustomerId.ToString(), 0, typeof( string ) ) );
        // project value on index 1 in resultset row onto CompanyName
        valueProjectors.Add( new DataValueProjector( CustomerFieldIndex.CompanyName.ToString(), 1, typeof( string ) ) );
        // the resultset contains more fields; we just project these 2. The rest is trivial.
        DataProjectorToIEntityCollection projector = new DataProjectorToIEntityCollection( customers );
        dao.GetAsProjection( valueProjectors, projector, reader );

        // second resultset: Orders.
        valueProjectors = new List<IDataValueProjector>();
        valueProjectors.Add( new DataValueProjector( OrderFieldIndex.OrderId.ToString(), 0, typeof( int ) ) );
        valueProjectors.Add( new DataValueProjector( OrderFieldIndex.CustomerId.ToString(), 1, typeof( string ) ) );
        valueProjectors.Add( new DataValueProjector( OrderFieldIndex.OrderDate.ToString(), 3, typeof( DateTime ) ) );
        // switch to the next resultset in the datareader
        reader.NextResult();
        projector = new DataProjectorToIEntityCollection( orders );
        dao.GetAsProjection( valueProjectors, projector, reader );
        reader.Close();
    }
}

' VB.NET
Dim customers As New CustomerCollection()


Dim orders As New OrderCollection()
Dim query As IRetrievalQuery = RetrievalProcedures.GetCustomersAndOrdersOnCountryCallAsQuery( "Germany" )
Try
    Dim dao As New TypedListDAO()
    Dim reader As IDataReader = dao.GetAsDataReader( Nothing, query, CommandBehavior.CloseConnection )
    Try
        ' first resultset: Customers.
        Dim valueProjectors As New ArrayList()
        ' project value on index 0 in resultset row onto CustomerId
        valueProjectors.Add( New DataValueProjector( CustomerFieldIndex.CustomerId.ToString(), 0, GetType( String ) ) )
        ' project value on index 1 in resultset row onto CompanyName
        valueProjectors.Add( New DataValueProjector( CustomerFieldIndex.CompanyName.ToString(), 1, GetType( String ) ) )
        ' the resultset contains more fields; we just project these 2. The rest is trivial.
        Dim projector As New DataProjectorToIEntityCollection( customers )
        dao.GetAsProjection( valueProjectors, projector, reader )

        ' second resultset: Orders.
        valueProjectors = New ArrayList()
        valueProjectors.Add( New DataValueProjector( OrderFieldIndex.OrderId.ToString(), 0, GetType( Integer ) ) )
        valueProjectors.Add( New DataValueProjector( OrderFieldIndex.CustomerId.ToString(), 1, GetType( String ) ) )
        valueProjectors.Add( New DataValueProjector( OrderFieldIndex.OrderDate.ToString(), 3, GetType( DateTime ) ) )
        ' switch to the next resultset in the datareader
        reader.NextResult()
        projector = New DataProjectorToIEntityCollection( orders )
        dao.GetAsProjection( valueProjectors, projector, reader )
        reader.Close()
    Finally
        reader.Dispose()
    End Try
Finally
    ' Not really necessary for SqlServer, but it's required on Oracle, so it's mentioned here
    ' for completeness.
    query.Dispose()
End Try

Projecting Dynamic List resultset onto custom classes
We can go one step further: fetch a dynamic list and fill a list of custom class instances, for example when the data has to be transported by a webservice and you want lightweight Data Transfer Objects (DTOs). Projecting a resultset onto custom classes is .NET 2.0 only, as the projection engine uses generics. Of course, you can write your own implementation of IGeneralDataProjector which performs class instantiation and property setting using reflection on .NET 1.x.
C#, .NET 2.0 VB.NET, .NET 2.0

// C#, .NET 2.0
List<CustomCustomer> customClasses = new List<CustomCustomer>();
ResultsetFields fields = new ResultsetFields( 4 );
fields[0] = CustomerFields.City;
fields[1] = CustomerFields.CompanyName;
fields[2] = CustomerFields.CustomerId;


fields[3] = CustomerFields.Country;
DataProjectorToCustomClass<CustomCustomer> projector =
    new DataProjectorToCustomClass<CustomCustomer>( customClasses );
// Define the projections of the fields.
List<IDataValueProjector> valueProjectors = new List<IDataValueProjector>();
valueProjectors.Add( new DataValueProjector( "City", 0, typeof( string ) ) );
valueProjectors.Add( new DataValueProjector( "CompanyName", 1, typeof( string ) ) );
valueProjectors.Add( new DataValueProjector( "CustomerID", 2, typeof( string ) ) );
valueProjectors.Add( new DataValueProjector( "Country", 3, typeof( string ) ) );
// perform the fetch combined with the projection.
TypedListDAO dao = new TypedListDAO();
dao.GetAsProjection( valueProjectors, projector, null, fields, null, null, 0, null, true );

' VB.NET .NET 2.0
Dim customClasses As New List(Of CustomCustomer)()
Dim fields As New ResultsetFields( 4 )
fields(0) = CustomerFields.City
fields(1) = CustomerFields.CompanyName
fields(2) = CustomerFields.CustomerId
fields(3) = CustomerFields.Country
Dim projector As New DataProjectorToCustomClass(Of CustomCustomer)( customClasses )
' Define the projections of the fields.
Dim valueProjectors As New List(Of IDataValueProjector)()
valueProjectors.Add( New DataValueProjector( "City", 0, GetType( String ) ) )
valueProjectors.Add( New DataValueProjector( "CompanyName", 1, GetType( String ) ) )
valueProjectors.Add( New DataValueProjector( "CustomerID", 2, GetType( String ) ) )
valueProjectors.Add( New DataValueProjector( "Country", 3, GetType( String ) ) )
' perform the fetch combined with the projection.
Dim dao As New TypedListDAO()
dao.GetAsProjection( valueProjectors, projector, Nothing, fields, Nothing, Nothing, 0, Nothing, True )

Where the custom class is:

public class CustomCustomer
{
    #region Class Member Declarations
    private string _customerID, _companyName, _city, _country;
    #endregion

    public CustomCustomer()
    {
        _city = string.Empty;
        _companyName = string.Empty;
        _customerID = string.Empty;
        _country = string.Empty;
    }

    #region Class Property Declarations
    public string CustomerID
    {
        get { return _customerID; }


        set { _customerID = value; }
    }

    public string City
    {
        get { return _city; }
        set { _city = value; }
    }

    public string CompanyName
    {
        get { return _companyName; }
        set { _companyName = value; }
    }

    public string Country
    {
        get { return _country; }
        set { _country = value; }
    }
    #endregion
}
LLBLGen Pro v2.6 documentation. ©2002-2008 Solutions Design


Generated code - Getting started with filtering, SelfServicing
Preface
One of the most powerful aspects of the generated code and the framework it forms is the ability to formulate filters and sort clauses directly in your code and have them evaluated at runtime. This means that once the framework has been generated, developers working on business logic code can formulate specific filters to request only the information necessary for the task they're currently working on, without the requirement that each filter be defined in a special stored procedure. When filters and sort clauses are used to fetch data from the persistent storage (database), they are transformed to SQL and embedded into the actual SQL query by the used Dynamic Query Engine (see Dynamic SQL). Filters are fully parameterized, so execution plans are preserved in the database server's optimizer, and because values are never concatenated into the SQL query itself, there is no risk of SQL injection attacks. When filters and sort clauses are used in combination with EntityView objects (see: Generated code - using the EntityView class) they're interpreted in-memory and not converted to SQL. Not all filter constructs available for use with a database are available when you're filtering data in-memory; when a predicate class is usable in-memory, this is mentioned with the predicate class. In-memory filtering doesn't use relations, just predicates. This section describes the different ways of constructing filters using predicates and predicate expressions, and how to use multi-entity spanning filters as well, using RelationCollection objects. It furthermore tells you how to construct sort clauses to sort the data you requested. The generated code contains factory classes for most predicates and all sort clauses to ease the creation of these objects. The first subsection describes in an abstract way the philosophy behind the predicate objects.
This abstract discussion might look a little complex at first, but it describes the way predicates can and should be organized into predicate expressions, and when to do that to get the results you want. All predicate construction methods of LLBLGen Pro are compile-time checked. This means that if you, for example, rename a field in the designer, regenerate your code, and recompile your projects, the compiler will notify you where you used the old name and thus which lines you have to update. This is crucial for reliable software development.

Upgrading from v1.0.200x.y: no PredicateFactory
In previous versions of LLBLGen Pro, v1.0.2005.1 and earlier, a class called PredicateFactory was generated by default. This class contained, for most predicate classes, a convenient construction method for each field in each entity. In larger projects however this led to a very big class which was unusable in VS.NET due to the high number of overloads of a single method. In v2.0 of LLBLGen Pro this class is no longer generated by default and its use in your code is discouraged. You can still generate this class however: simply enable the PredicateFactory generation task in the run queue of your preset of choice (see: Designer - Generating code). This documentation will avoid the usage of the PredicateFactory class, unless stated otherwise. If you need information about the PredicateFactory class, please consult the documentation of v1.0.2005.1, still available at our website.

Predicates and Predicate expressions
Filtering is the same for entities, typed views and typed lists: you construct a predicate expression and pass that as parameter to a method which retrieves data or works on data. A predicate is effectively a clause


used in a WHERE clause which results in True or False. 'WHERE' itself is not part of the predicate however. Predicates can be grouped in a predicate expression. Predicate expressions themselves can also be grouped inside other predicate expressions. Predicates can be placed inside a predicate expression with the operators 'And' and 'Or'; likewise, predicate expressions can be placed inside another predicate expression with the operators 'And' and 'Or'. Now, this might all sound a little complex, so let's illustrate it with an example of a nested WHERE clause with some predicate expressions.

... Some Select statement
WHERE
(
    Table1.Foo = @param1
    AND
    Table1.Bar = @param2
)
OR
Table2.Bar2 = @param3

The complete filter is: (Table1.Foo = @param1 AND Table1.Bar = @param2) OR Table2.Bar2 = @param3. The following predicates are found in this filter:

Table1.Foo = @param1
Table1.Bar = @param2
Table2.Bar2 = @param3

There are 2 predicate expressions found:

A. (Table1.Foo = @param1 AND Table1.Bar = @param2)
B. (Table1.Foo = @param1 AND Table1.Bar = @param2) OR Table2.Bar2 = @param3

To formulate the filter correctly, we start by constructing an empty predicate expression, B. Let's assume param1 has the value "One", param2 has the value "Two" and param3 has the value "Three".
C# VB.NET

// [C#]
IPredicateExpression B = new PredicateExpression();

' [VB.NET]
Dim B As IPredicateExpression = New PredicateExpression()

The easiest way to proceed is then to construct predicate expression A:
C# VB.NET, .NET 1.x VB.NET, .NET 2.0

// [C#]
IPredicateExpression A = new PredicateExpression();
A.Add(Table1Fields.Foo == "One");
A.AddWithAnd(Table1Fields.Bar == "Two");

' [VB.NET] .NET 1.x
Dim A As IPredicateExpression = New PredicateExpression()
A.Add(New FieldCompareValuePredicate(Table1Fields.Foo, ComparisonOperator.Equal, "One"))
A.AddWithAnd(New FieldCompareValuePredicate(Table1Fields.Bar, ComparisonOperator.Equal, "Two"))

' [VB.NET] .NET 2.0


Dim A As IPredicateExpression = New PredicateExpression()
A.Add(Table1Fields.Foo = "One")
A.AddWithAnd(Table1Fields.Bar = "Two")

A is now constructed and we can add this predicate expression as a single predicate to the predicate expression B:
C# VB.NET

// [C#]
B.Add(A);

' [VB.NET]
B.Add(A)

There is one predicate left, OR Table2.Bar2 = @param3. Let's add that one with the Or operator directly to B:
C# VB.NET, .NET 1.x VB.NET, .NET 2.0

// [C#]
B.AddWithOr(Table2Fields.Bar2 == "Three");

' [VB.NET] .NET 1.x
B.AddWithOr(New FieldCompareValuePredicate(Table2Fields.Bar2, ComparisonOperator.Equal, "Three"))

' [VB.NET] .NET 2.0
B.AddWithOr(Table2Fields.Bar2 = "Three")

B has now been filled with the complete filter. To sum it up, below are the complete sections of code to construct the complete predicate expression:
C# VB.NET, .NET 1.x VB.NET, .NET 2.0

// [C#]
IPredicateExpression B = new PredicateExpression();
IPredicateExpression A = new PredicateExpression();
A.Add(Table1Fields.Foo == "One");
A.AddWithAnd(Table1Fields.Bar == "Two");
B.Add(A);
B.AddWithOr(Table2Fields.Bar2 == "Three");

' [VB.NET] .NET 1.x
Dim B As IPredicateExpression = New PredicateExpression()
Dim A As IPredicateExpression = New PredicateExpression()
A.Add(New FieldCompareValuePredicate(Table1Fields.Foo, ComparisonOperator.Equal, "One"))
A.AddWithAnd(New FieldCompareValuePredicate(Table1Fields.Bar, ComparisonOperator.Equal, "Two"))
B.Add(A)
B.AddWithOr(New FieldCompareValuePredicate(Table2Fields.Bar2, ComparisonOperator.Equal, "Three"))

' [VB.NET] .NET 2.0
Dim B As IPredicateExpression = New PredicateExpression()
Dim A As IPredicateExpression = New PredicateExpression()


A.Add(Table1Fields.Foo = "One")
A.AddWithAnd(Table1Fields.Bar = "Two")
B.Add(A)
B.AddWithOr(Table2Fields.Bar2 = "Three")

There is no maximum set for the number of predicate objects you can add to a predicate expression, nor has a maximum been set for the number of predicate expressions you can nest into each other. As a rule of thumb, every set of predicates that should be grouped together as a single boolean expression should be placed in a separate PredicateExpression object: the complete contents of a PredicateExpression object will be placed inside a '()' pair to group the predicates physically in the SQL query.

Creating and working with field objects
The filtering system of LLBLGen Pro uses predicate classes, which work with entity field objects (or typed view field objects). LLBLGen Pro offers a convenient way to produce entity field objects: entitynameFields.FieldName and typedviewnameFields.FieldName. Example:
C# VB.NET

// C#
EntityField companyNameField = CustomerFields.CompanyName;

' VB.NET
Dim companyNameField As EntityField = CustomerFields.CompanyName

In earlier versions you needed to use:
C# VB.NET

// C#
IEntityField companyNameField = EntityFieldFactory.Create(CustomerFieldIndex.CompanyName);

' VB.NET
Dim companyNameField As IEntityField = EntityFieldFactory.Create(CustomerFieldIndex.CompanyName)

To utilize this feature, please add the following code to your code file:
C# VB.NET

// C#
using yourrootnamespace.HelperClasses;

' VB.NET
Imports yourrootnamespace.HelperClasses

In the section The predicate system, filter creation using operator overloading is discussed, which shows how field objects can be utilized together with native C# and VB.NET 2005 operators to form predicates.

Setting aliases, expressions and aggregates on fields
To set an aggregate function, an expression (see Field expressions and aggregates) or an object alias, you can use method chaining via special methods which set the appropriate property. Typically, when you want to set an aggregate function on a field, your code will look like:
C# VB.NET

// C#


// C#
// create the field
EntityField companyNameField = CustomerFields.CompanyName;
// set the aggregate
companyNameField.AggregateFunctionToUse = AggregateFunction.Sum;

' VB.NET
' create the field
Dim companyNameField As EntityField = CustomerFields.CompanyName
' set the aggregate
companyNameField.AggregateFunctionToUse = AggregateFunction.Sum

Due to the assignment statement, you can't simply specify a field directly with a predicate class constructor and set the aggregate function, expression or object alias at the same time. However, using the EntityField methods SetAggregateFunction(), SetExpression() and SetObjectAlias() you can. Below is an example of a filter to use in a Having clause:
C# VB.NET, .NET 1.x VB.NET, .NET 2.0

// C#
// SUM(Quantity) > 4 filter
IPredicate filter = (OrderDetailsFields.Quantity.SetAggregateFunction(AggregateFunction.Sum) > 4);

' VB.NET .NET 1.x
' SUM(Quantity) > 4 filter
Dim filter As IPredicate = New FieldCompareValuePredicate( _
    OrderDetailsFields.Quantity.SetAggregateFunction(AggregateFunction.Sum), _
    ComparisonOperator.GreaterThan, 4)

' VB.NET .NET 2.0
' SUM(Quantity) > 4 filter
Dim filter As IPredicate = (OrderDetailsFields.Quantity.SetAggregateFunction(AggregateFunction.Sum) > 4)

Please note that VB.NET 2002/2003 doesn't support operator overloading and has to use the FieldCompareValuePredicate class for the filter construction.

What to include in a filter
For filtering Typed View objects, you can only filter on one or more fields in the Typed View itself. When you want to filter a Typed List, you can specify one or more fields which are part of the entities forming the base of the Typed List: you use the field objects of the entities included in the typed list, e.g. CustomerFields.CompanyName. When filtering entities, using a method of an entity collection object, you have two possibilities:

You filter on fields in the entity type to retrieve. This is the most commonly used type. Example: when you want to retrieve a set of customer objects, you filter on one or more customer fields.
You filter on fields in the entity type to retrieve and/or on fields in a related entity of the entity type to retrieve. This is more advanced; an example of this is illustrated in Multi-entity filters.

In all cases, be sure the field you filter on is in the entity type you want to retrieve or, when you use multi-entity filtering, be sure the field(s) in the filter are in any of the entities mentioned in the RelationCollection specified.
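To illustrate the first, most common case, the following sketch fetches a filtered set of customers using SelfServicing. It assumes the Northwind-style Customer entity used in the examples throughout this documentation; the field and value are only examples.

// C#
// Fetch all customers located in Germany: a filter on a field
// of the entity type to retrieve (the most common scenario).
CustomerCollection customers = new CustomerCollection();
IPredicateExpression filter = new PredicateExpression();
filter.Add(CustomerFields.Country == "Germany");
customers.GetMulti(filter);

GetMulti() transforms the predicate into a parameterized WHERE clause and executes the fetch.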


Generated code - The predicate system, SelfServicing
Preface
LLBLGen Pro's filtering capabilities are built using predicate objects, instantiated from predicate classes. This section describes these classes in depth: how to use them and what their purpose is, and it shows an example for every predicate. To quickly understand which predicate class you need, a handy table is provided below, which should help you convert a WHERE construct in SQL to LLBLGen Pro predicates. These predicate classes are usable in queries on the database and often also in-memory. Below the table, a list of predicate classes is given which are solely usable for in-memory filtering.

Predicate classes to use for database or in-memory filtering:

SQL                                          Predicate class to use
Field BETWEEN 3 AND 5                        FieldBetweenPredicate
Field BETWEEN field2 AND 4
Field = Field2                               FieldCompareExpressionPredicate
Field < (Field2 * 4)
Field IS NULL                                FieldCompareNullPredicate
Field IN (1, 2, 3, 5)                        FieldCompareRangePredicate
Field IN (SELECT Field FROM Foo WHERE ...)   FieldCompareSetPredicate
Field = 3                                    FieldCompareValuePredicate
Field != "Foo"
Field LIKE "Foo%"                            FieldLikePredicate

Predicate classes to use for in-memory filtering only:

AggregateSetPredicate. This predicate is usable to filter a set of entities based on the result of an aggregate function executed on related entities. Example: all customers which have at least 5 orders.
DelegatePredicate (generic and non-generic variants). This predicate is usable to filter a set of entities based on a function you write yourself. This is the most flexible way to filter entities in memory.
MemberPredicate. This predicate is usable to filter a set of entities based on an aspect of a related entity or related set of entities. It's a meta-predicate which applies another predicate onto the related member or members, and the result of that (true or false) is used to accept or deny an entity in the set to filter.

The predicate classes
LLBLGen Pro offers you a wide range of different predicate classes to define your filters with. Each predicate class whose name starts with Field works on a field, so they all have the form field-operator-values. Sometimes this is not enough; you for example want to filter using the predicate (Field * 3) > OtherField. This can be accomplished by adding an expression to the field filtered on. See Field expressions and aggregates for more information about expressions. All predicate classes implement the IPredicate interface and derive from the Predicate class which is located, like the predicate classes, in the ORMSupportClasses assembly. If you want to add a specific predicate to the pack already offered, you can: implement a class also deriving from Predicate and you're done. To see the full class signatures and their methods, please consult the LLBLGen Pro reference manual's SD.LLBLGen.Pro.ORMSupportClasses namespace. The various examples below utilize the previously discussed entitynameFields classes to create new entity fields to work with the predicate classes. You can also specify existing fields from an entity instance, so these three lines are equivalent:


C# VB.NET

// C#
IEntityField field = EntityFieldFactory.Create(OrderFieldIndex.OrderDate);
EntityField field = OrderFields.OrderDate;
IEntityField field = myOrder.Fields[(int)OrderFieldIndex.OrderDate];

' VB.NET
Dim field As IEntityField = EntityFieldFactory.Create(OrderFieldIndex.OrderDate)
Dim field As EntityField = OrderFields.OrderDate
Dim field As IEntityField = myOrder.Fields(CInt(OrderFieldIndex.OrderDate))

Please note that the entitynameFields.Fieldname constructs return an EntityField type, not an IEntityField type, to be sure operators work on them (see next section).

In-memory filtering and entity properties
To filter an EntityView you use in-memory filtering with normal predicate classes, be it the Field* classes or the predicate classes for in-memory filtering. To filter the entities accessible through the EntityView, you can use normal entity fields obtained with the constructs mentioned above. However, for filtering on properties which aren't fields, you can't use EntityField instances. To use normal predicate classes to filter on these properties, in-memory only, you can use the class EntityProperty. This class behaves like an EntityField in predicate objects and Expression objects when they're used for in-memory filtering. An example of EntityProperty in action can be found in Generated code - using the EntityView class.
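As a minimal sketch of the idea (the property name 'IsDirty', an entity property which isn't a field, is just an example; any property exposed by the entity could be used):

// C#
// Filter the entities visible through an EntityView in-memory on a
// property which isn't an entity field, using EntityProperty.
EntityView customerView = customers.DefaultView;
customerView.Filter = (new EntityProperty("IsDirty") == true);

Because EntityProperty behaves like an EntityField in predicates, the familiar operator overloads can be used to form the in-memory filter.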

Native language filter construction

Note: The feature discussed in this paragraph is not available to VB.NET 2002/2003 users, because VB.NET 2002/2003 doesn't support operator overloading; users of VB.NET 2002/2003 have to fall back on the more verbose way of constructing filters. VB.NET 2005 supports operator overloading in full and thus also this feature. All versions of C# support operator overloading.

As briefly discussed in Getting started with filtering, you can formulate filters using compact, native language constructs. In the following sub-sections, which each discuss a predicate class, examples will be given of how filters can be formulated using the native language constructs supported by LLBLGen Pro. It doesn't stop there: you can also construct sort expressions and predicate expressions using native language constructs. The following paragraphs will show you how this is performed. Please note that the VB.NET examples only work in VB.NET 2005.

Constructing Predicate Expressions
In the paragraph Predicates and Predicate expressions, you were introduced to the concepts of predicates and predicate expressions. The following examples show equivalents of the earlier examples in that paragraph to illustrate how to use native language constructs to create predicate expressions. These predicate expressions are created by the overloaded operators & and | in C#, and And and Or in VB.NET 2005. To produce the same full filter as illustrated in Predicates and Predicate expressions, use the following code. It uses a single step, which skips the separate creation of filter A.
C# VB.NET 2005

// C#
IPredicateExpression B = ((Table1Fields.Foo == "One") & (Table1Fields.Bar == "Two"))  // A
        | (Table2Fields.Bar2 == "Three");

' VB.NET 2005
Dim B As IPredicateExpression = ((Table1Fields.Foo = "One") And (Table1Fields.Bar = "Two")) _
        Or (Table2Fields.Bar2 = "Three")

Page 384

It's also possible to negate a predicate with the native language operator ! (C#) or Not (VB.NET 2005). The same example as previously, but now with the last predicate negated:
C# VB.NET 2005

// C#
IPredicateExpression B = ((Table1Fields.Foo == "One") & (Table1Fields.Bar == "Two"))  // A
        | !(Table2Fields.Bar2 == "Three");

' VB.NET 2005
Dim B As IPredicateExpression = ((Table1Fields.Foo = "One") And (Table1Fields.Bar = "Two")) _
        Or Not (Table2Fields.Bar2 = "Three")

To chain several predicates together into a single predicate expression, you can also consider the AddWithAnd and AddWithOr methods of the PredicateExpression object. Every (predicate Operator predicate) statement results in a PredicateExpression object. Say you want to produce the following filter (the example shows values in the WHERE clause; LLBLGen Pro always generates parameters for values, it never includes any value in the query directly):

... WHERE TableFields.Foo = "One" OR TableFields.Foo = "Two" OR TableFields.Foo = "Three" OR TableFields.Foo = "Four"

You should use this code:
C# VB.NET 2005

// C#
IPredicateExpression filter = ((TableFields.Foo=="One") | (TableFields.Foo=="Two"))
        .AddWithOr(TableFields.Foo=="Three")
        .AddWithOr(TableFields.Foo=="Four");

// which is equal to:
IPredicateExpression filter = new PredicateExpression();
filter.Add(TableFields.Foo=="One")
        .AddWithOr(TableFields.Foo=="Two")
        .AddWithOr(TableFields.Foo=="Three")
        .AddWithOr(TableFields.Foo=="Four");

// which is equal to:
IPredicateExpression filter = new PredicateExpression();
filter.Add(new FieldCompareValuePredicate(TableFields.Foo, ComparisonOperator.Equal, "One"))
        .AddWithOr(new FieldCompareValuePredicate(TableFields.Foo, ComparisonOperator.Equal, "Two"))
        .AddWithOr(new FieldCompareValuePredicate(TableFields.Foo, ComparisonOperator.Equal, "Three"))
        .AddWithOr(new FieldCompareValuePredicate(TableFields.Foo, ComparisonOperator.Equal, "Four"));

' VB.NET 2005
Dim filter As IPredicateExpression = _
        ((TableFields.Foo="One") Or (TableFields.Foo="Two")).AddWithOr(TableFields.Foo="Three") _


        .AddWithOr(TableFields.Foo="Four")

' which is equal to: (VB.NET 2005)
Dim filter As New PredicateExpression()
filter.Add(TableFields.Foo="One").AddWithOr(TableFields.Foo="Two").AddWithOr(TableFields.Foo="Three").AddWithOr(TableFields.Foo="Four")

' which is equal to: (VB.NET 2002/2003/2005)
Dim filter As New PredicateExpression()
' NOTE: the following statement is spread over multiple lines here for readability;
' VB.NET requires line-continuation characters to do this.
filter.Add(New FieldCompareValuePredicate(TableFields.Foo, ComparisonOperator.Equal, "One")) _
        .AddWithOr(New FieldCompareValuePredicate(TableFields.Foo, ComparisonOperator.Equal, "Two")) _
        .AddWithOr(New FieldCompareValuePredicate(TableFields.Foo, ComparisonOperator.Equal, "Three")) _
        .AddWithOr(New FieldCompareValuePredicate(TableFields.Foo, ComparisonOperator.Equal, "Four"))

Fields and operators
Various operators are defined to work with the EntityField objects to form filters, or even expressions (see for expressions: Field expressions and aggregates). The table below shows what kind of object is created when you use the specified operator on an EntityField object; an example of the usage is given as well. It's recommended you consult the LLBLGen Pro reference manual, located in your LLBLGen Pro installation folder, and check which operators are defined on which classes, for example the operators on SD.LLBLGen.Pro.ORMSupportClasses.EntityField. For example the '==' / '=' operator (Equality) can produce different predicate object types, depending on the right hand side value: if it's another EntityField object or an Expression, a FieldCompareExpressionPredicate is produced; if it's null / Nothing, a FieldCompareNullPredicate is produced. See the reference manual and the predicate classes below for more details.

C#   VB.NET 2005   Description               Example                                                 Object produced
+    +             Addition                  (OrderDetailsFields.Quantity + 10)                      Expression
|    Or            SortClause construction   (CustomerFields.CompanyName | SortOperator.Ascending)   SortClause
/    /             Division                  (OrderDetailsFields.Quantity / 10)                      Expression
==   =             Equality                  (CustomerFields.CompanyName == "Foo Inc.")              FieldCompareValuePredicate, FieldCompareExpressionPredicate, FieldCompareRangePredicate, FieldCompareNullPredicate
>    >             Greater Than              (OrderDetailsFields.Quantity > 10)                      FieldCompareValuePredicate, FieldCompareExpressionPredicate
>=   >=            Greater Than Or Equal     (OrderDetailsFields.Quantity >= 10)                     FieldCompareValuePredicate, FieldCompareExpressionPredicate
!=   <>            Inequality                (CustomerFields.CompanyName != "Foo Inc.")              FieldCompareValuePredicate, FieldCompareExpressionPredicate, FieldCompareRangePredicate, FieldCompareNullPredicate
<    <             Lesser Than               (OrderDetailsFields.Quantity < 10)                      FieldCompareValuePredicate, FieldCompareExpressionPredicate
<=   <=            Lesser Than Or Equal      (OrderDetailsFields.Quantity <= 10)                     FieldCompareValuePredicate, FieldCompareExpressionPredicate
%    Mod           Like creation             (CustomerFields.CompanyName % "Foo%")                   FieldLikePredicate
*    *             Multiplication            (OrderDetailsFields.Quantity * 10)                      Expression
-    -             Subtraction               (OrderDetailsFields.Quantity - 10)                      Expression

Not all predicate classes can be created with the operator overloads; for example FieldBetweenPredicate and FieldCompareSetPredicate aren't constructable using the operator overloads discussed in the previous table. Nevertheless, using the field construction classes and the Set*() methods, it is easy to construct these predicates as well.

Mixing of construction ways
Because the combination of an EntityField and an operator produces a normal object like a FieldCompareValuePredicate or an Expression object, you can mix the construction ways in your code. So for example, the easy way to construct predicate expressions shown earlier also works on objects created using the predicate classes directly, as shown in the following example, which mixes the two techniques:
C# VB.NET

// C#
PredicateExpression filter = (OrderFields.CustomerID == "CHOPS") &
        (new FieldBetweenPredicate(OrderFields.ShippedDate, dateStart, dateEnd));

' VB.NET
Dim filter As PredicateExpression = (OrderFields.CustomerID = "CHOPS") And _
        (New FieldBetweenPredicate(OrderFields.ShippedDate, dateStart, dateEnd))

Predicate classes for database queries or in-memory filtering
FieldBetweenPredicate

Note: The SQL examples given can contain absolute values. The SQL generated by the predicates will never contain absolute values, as absolute values are converted to parameters.

Description: compares the entity field specified using a BETWEEN operator; results in true if the field's value is greater than or equal to the value of valueBegin and less than or equal to the value of valueEnd. valueBegin and valueEnd can be any value or an EntityField, as shown in the examples.

SQL equivalent:
    Field BETWEEN valueStart AND valueEnd
    Field BETWEEN OtherField AND valueEnd
    Field BETWEEN valueStart AND OtherField

Operators: none.

Example:

C# VB.NET

// C# filter.Add(new FieldBetweenPredicate( OrderFields.OrderDate, dateStart, dateEnd)); // or: ShippingDate BETWEEN RequiredDate and dateEnd filter.Add(new FieldBetweenPredicate( OrderFields.ShippingDate, OrderFi elds.RequiredDate, dateEnd));


' VB.NET
filter.Add(New FieldBetweenPredicate( _
    OrderFields.OrderDate, dateStart, dateEnd))
' or: ShippingDate BETWEEN RequiredDate and dateEnd
filter.Add(New FieldBetweenPredicate( _
    OrderFields.ShippingDate, OrderFields.RequiredDate, dateEnd))

Can be used for in-memory filtering: Yes

FieldCompareExpressionPredicate
Description: compares the entity field specified with the expression specified, using the ComparisonOperator specified. See the Field expressions and aggregates section for a detailed description of expressions.

SQL equivalent examples:
Field > (OtherField * 2)
Field <= OtherField

Operators: all ComparisonOperator operators: Equal, GreaterEqual, GreaterThan, LessEqual, LesserThan, NotEqual

Example: This example creates a predicate which compares Order.OrderDate with Order.ShippingDate. The example might look a little verbose, but Expression objects are re-usable, which allows you to define an Expression object once and re-use it each time you need it, for example with a predicate class. The example illustrates a very basic expression, but the expression you specify can be very complex. See Field expressions and aggregates for more information about expressions.

// C#
filter.Add(new FieldCompareExpressionPredicate(
    OrderFields.OrderDate, ComparisonOperator.Equal,
    new Expression(OrderFields.ShippedDate)));
// which is equal to:
filter.Add((OrderFields.OrderDate == OrderFields.ShippedDate));

' VB.NET
filter.Add(New FieldCompareExpressionPredicate( _
    OrderFields.OrderDate, ComparisonOperator.Equal, _
    New Expression(OrderFields.ShippedDate)))
' which is equal to: (VB.NET 2005)
filter.Add((OrderFields.OrderDate = OrderFields.ShippedDate))

The following example filters on orders which have been shipped 4 days after the order date:

// C#
filter.Add(new FieldCompareExpressionPredicate(
    OrderFields.ShippedDate, ComparisonOperator.Equal,
    new Expression(OrderFields.OrderDate, ExOp.Add, 4)));
// which is equal to:
filter.Add((OrderFields.ShippedDate == (OrderFields.OrderDate + 4)));


' VB.NET
filter.Add(New FieldCompareExpressionPredicate( _
    OrderFields.ShippedDate, ComparisonOperator.Equal, _
    New Expression(OrderFields.OrderDate, ExOp.Add, 4)))
' which is equal to: (VB.NET 2005)
filter.Add((OrderFields.ShippedDate = (OrderFields.OrderDate + 4)))

Can be used for in-memory filtering: Yes

FieldCompareNullPredicate
Description: compares the entity field specified with NULL.

SQL equivalent: Field IS NULL

Operators: none.

Example:

// C#
filter.Add(new FieldCompareNullPredicate(OrderFields.OrderDate));
// which is equal to:
filter.Add((OrderFields.OrderDate == System.DBNull.Value));

' VB.NET
filter.Add(New FieldCompareNullPredicate(OrderFields.OrderDate))
' which is equal to: (VB.NET 2005)
filter.Add((OrderFields.OrderDate = System.DBNull.Value))

Can be used for in-memory filtering: Yes

FieldCompareRangePredicate


Description: compares the entity field specified with the range of values specified, using the IN operator. The range is not a subquery; use FieldCompareSetPredicate for that. The range can be supplied in an ArrayList, in an array, or hardcoded in the predicate constructor.

SQL equivalent examples:
Field IN (1, 2, 5, 10)
Field IN ("Foo", "Bar", "Blah")

Operators: none.

Example: This example creates a predicate which compares Order.EmployeeId with the range 1, 2, 5, stored in an array. The values can also be specified directly in the constructor.

// C#
int[] values = new int[3] {1, 2, 5};
filter.Add(new FieldCompareRangePredicate(OrderFields.EmployeeId, values));
// which is equal to:
filter.Add(OrderFields.EmployeeId == values);

' VB.NET
Dim values As Integer() = New Integer(2) {1, 2, 5}
filter.Add(New FieldCompareRangePredicate(OrderFields.EmployeeId, values))
' which is equal to: (VB.NET 2005)
filter.Add(OrderFields.EmployeeId = values)

Can be used for in-memory filtering: Yes
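The description above also mentions hardcoding the values directly in the predicate constructor. A minimal sketch of that variant follows; it assumes a params-style constructor overload, so check the runtime library reference for the exact signature:

// C#
// Values passed directly to the constructor instead of via an array.
// Assumed params-style overload; the exact signature may differ.
filter.Add(new FieldCompareRangePredicate(OrderFields.EmployeeId, 1, 2, 5));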

FieldCompareSetPredicate
Description: compares the entity field specified with the set of values defined by the subquery elements, using the SetOperator specified. The FieldCompareSetPredicate is the predicate to use when you want to compare a field's value with a range of values retrieved from another table / view (or the same table / view) using a subquery. FieldCompareSetPredicate also allows you to define EXISTS () queries; it is then not necessary to specify an IEntityField object in the predicate's constructor (specify null / Nothing), as it is ignored when building the SQL. Keep in mind that EXISTS() queries are semantically the same as IN queries, and IN queries are often simpler to formulate. The FieldCompareSetPredicate supports advanced comparison operators like ANY and ALL, and combinations of these with comparison operators like Equal (=) or GreaterThan (>). If the set is just one value in size (because you've specified a limit on the number of rows to return), it's wise to use the Equal operator instead of the IN operator, as most databases are rather slow with IN and just one value compared to the Equal operator.

SQL equivalent examples:
Field IN (SELECT OtherField FROM OtherTable WHERE Foo=2)
EXISTS (SELECT * FROM OtherTable)

Operators: all SetOperator operators: In, Exists, Equal, EqualAny, EqualAll, LessEqual, LessEqualAny, LessEqualAll, LesserThan, LesserThanAny, LesserThanAll, GreaterEqual, GreaterEqualAny, GreaterEqualAll, GreaterThan, GreaterThanAny, GreaterThanAll, NotEqual, NotEqualAny, NotEqualAll


Example: This example illustrates the query: Customer.CustomerID IN (SELECT CustomerID FROM Orders WHERE EmployeeID=2)

// C#
filter.Add(new FieldCompareSetPredicate(
    CustomerFields.CustomerID, OrderFields.CustomerID,
    SetOperator.In, (OrderFields.EmployeeID == 2)));

' VB.NET
filter.Add(New FieldCompareSetPredicate( _
    CustomerFields.CustomerID, OrderFields.CustomerID, _
    SetOperator.In, _
    New FieldCompareValuePredicate(OrderFields.EmployeeID, _
    ComparisonOperator.Equal, 2)))
' which is equal to: (VB.NET)
filter.Add(New FieldCompareSetPredicate( _
    CustomerFields.CustomerID, OrderFields.CustomerID, _
    SetOperator.In, (OrderFields.EmployeeID = 2)))

Can be used for in-memory filtering: No
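The description above also mentions EXISTS () queries, for which the IEntityField argument is passed as null / Nothing because it is ignored when the SQL is built. A minimal sketch under that assumption; the filter on EmployeeID is a hypothetical example, not taken from the original manual:

// C#
// Hypothetical EXISTS query: EXISTS (SELECT * FROM Orders WHERE EmployeeID = 2).
// The first argument is null because it is ignored for EXISTS queries;
// the set field (OrderFields.CustomerID) determines the subquery's FROM table.
filter.Add(new FieldCompareSetPredicate(
    null, OrderFields.CustomerID, SetOperator.Exists,
    (OrderFields.EmployeeID == 2)));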

FieldCompareValuePredicate
Description compares the entity field specified with the value sp