Any Gotchas with this strategy?
I am about to put together a reporting / end user interface for our
application. Because the reports generated will be of a "document type"
(invoices, delivery notes etc.), the reporting process needs to be fast.
Moreover, the grouping and sorting of data is critical, and not always on
indexed fields. I have found the query-based DADE extremely inefficient and
slow for these types of reports; the other method, deploying multiple
TppDBPipelines mapped to Master-Detail-Detail-Detail-Lookup etc. tables
(there can be up to 15 or so detail / lookup tables), is fast, but it
requires too many subreports to achieve the necessary grouping and sorting,
end users will get confused, and it still restricts me to indexed fields.
My solution is outlined below. It seems to work very well, but before I
really put my back into implementing it, are there any gotchas that anyone
can anticipate? As a newbie RB Pro user, have I missed something obvious and
come up with a crack-pot solution that is completely unnecessary? Comments
appreciated.
1. I have adapted a MemoryTable component that allows you to i) assign
a table component that is the ultimate Master table of a complex set of
linked Master-Detail-Lookup tables in a datamodule; ii) on assigning the
master, have the memory table's fields created so that a FlatMemoryTable is
effectively produced from all the tables in the set; iii) load the data from
all these tables into the FlatMemoryTable at design time or runtime (a rough
sketch of this flattening step follows after point 2).
2. I then need only one TppDBPipeline, which is linked to the memory table.
Each field name includes the name of the source table it comes from, both to
prevent duplication and to help identify where the field originated. I can
then create multiple nested groups in the main report based on ANY of the
fields in the FlatMemoryTable; moreover, I can synchronise the creation of
composite in-memory indexes to match the groups in the report and so ensure
correct sorting (a sketch of this synchronisation appears further below). And
by using the TppDBPipeline Fields Editor I can determine which fields will be
available to the end user.
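To make point 1 concrete, here is a rough sketch of the flattening step. It
uses a plain TClientDataSet as a stand-in for my adapted memory table,
assumes one master and one detail dataset already linked on the datamodule,
and the BuildFlatTable name is made up purely for illustration:

uses
  DB, DBClient;

{ Sketch only: TClientDataSet stands in for the adapted memory table.
  Master and Detail are assumed to be TDataSet descendants already
  linked master-detail on the datamodule. }
procedure BuildFlatTable(Master, Detail: TDataSet; Flat: TClientDataSet);
var
  i: Integer;
begin
  Flat.Close;
  Flat.FieldDefs.Clear;

  { One flat field per source field, prefixed with the source dataset's
    component name so that fields such as "ID" cannot collide. }
  for i := 0 to Master.FieldCount - 1 do
    Flat.FieldDefs.Add(Master.Name + '_' + Master.Fields[i].FieldName,
      Master.Fields[i].DataType, Master.Fields[i].Size);
  for i := 0 to Detail.FieldCount - 1 do
    Flat.FieldDefs.Add(Detail.Name + '_' + Detail.Fields[i].FieldName,
      Detail.Fields[i].DataType, Detail.Fields[i].Size);
  Flat.CreateDataSet;

  { The detail set is assumed to be filtered to the current master record
    by its master-detail link, so every detail row becomes one flat,
    denormalised row carrying its master's columns as well. }
  Master.First;
  while not Master.Eof do
  begin
    Detail.First;
    while not Detail.Eof do
    begin
      Flat.Append;
      for i := 0 to Master.FieldCount - 1 do
        Flat.FieldByName(Master.Name + '_' + Master.Fields[i].FieldName).Value :=
          Master.Fields[i].Value;
      for i := 0 to Detail.FieldCount - 1 do
        Flat.FieldByName(Detail.Name + '_' + Detail.Fields[i].FieldName).Value :=
          Detail.Fields[i].Value;
      Flat.Post;
      Detail.Next;
    end;
    Master.Next;
  end;
end;

The real component walks the whole Master-Detail-Lookup set rather than a
single detail, but the prefixing and flattening idea is the same.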
This all seems to work very well and very quickly on quite large
databases... although I've yet to test it over a network.
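For point 2, the synchronisation I mention looks roughly like this. It is a
minimal sketch assuming the flat TClientDataSet from the previous snippet;
SyncIndexToGroups is a made-up name, GroupCount / Groups[] / BreakName are
the RB Pro properties as I understand them, and the ReportBuilder unit names
may differ between versions:

uses
  DBClient, ppClass, ppReport;  { unit names may vary by RB version }

procedure SyncIndexToGroups(AReport: TppReport; Flat: TClientDataSet);
var
  i: Integer;
  IndexFields: string;
begin
  IndexFields := '';

  { Collect each group's break field, outermost group first, so the
    composite index sorts the data exactly as the groups expect. }
  for i := 0 to AReport.GroupCount - 1 do
  begin
    if IndexFields <> '' then
      IndexFields := IndexFields + ';';
    IndexFields := IndexFields + AReport.Groups[i].BreakName;
  end;

  { Assigning IndexFieldNames makes TClientDataSet build (or reuse) an
    in-memory index on those fields; no physical index is required. }
  Flat.IndexFieldNames := IndexFields;
end;

I call this just before printing, so the flat data always arrives in the
order the report's groups expect.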
--
Paul Toms mailto:pault@navigator.co.uk
Comments
and have hundreds of reports. I use the TMemDataSet component, which
is easy to use and makes sorting simple, though most mem tables do.
On Mon, 23 Jul 2001 05:13:41 -0500, "Paul Toms"
the table definitions in a report file's offset area, which completely frees
up an end user solution from my compiled code without going down the DADE
query route :-)