I understand how to compute significant size for an individual model (using Table 3-1: CDB LOD vs. Model Resolution). My question is: what is the best way to apply the information in this table to CDB tiles themselves? In other words, what interpretation of significant size should be used to determine the appropriate LOD to choose at a particular viewpoint? Do you use the bounding sphere of the tile to compute the significant size? Do you scale the values in the table by the radius of the tile's bounding sphere? Some other metric? I haven't found a good explanation of this in the specification (or I am missing something obvious).
Can you confirm which dataset you are talking about? If it is GSFeature or GTFeature, the LOD rule between the tiled data (shapefile) and the model is governed by the significant size (sigsize). A GSFeature tile at LOD2 can contain models with a height (per spec 3.0) of 27.2 m and above.
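To illustrate the rule, here is a small sketch that maps a model's height to the coarsest LOD tile that may contain it. Only the 27.2 m figure for LOD2 comes from this thread; the assumption that the threshold halves with each finer LOD is mine and should be verified against Table 3-1 of the spec. The function name and the LOD clamp are also illustrative.

```python
def min_lod_for_model(height_m, lod2_threshold=27.2, max_lod=23):
    """Coarsest LOD whose significant-size threshold the model meets.

    Assumes the threshold halves at each finer LOD, anchored at the
    LOD2 value of 27.2 m mentioned above -- verify against Table 3-1.
    Negative (coarser-than-LOD0) LODs are omitted for simplicity.
    """
    lod, threshold = 2, lod2_threshold
    # Walk toward coarser LODs while the model exceeds the next threshold up.
    while lod > 0 and height_m >= threshold * 2:
        lod -= 1
        threshold *= 2
    # Walk toward finer LODs while the model is below the current threshold.
    while height_m < threshold and lod < max_lod:
        lod += 1
        threshold /= 2
    return lod
```

So a 27.2 m model lands at LOD2, while a taller 60 m model already qualifies at LOD1 under these assumptions.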
The decision of which LOD of features (GS/GT feature vectors) to load depends on your client device. For a visual system with a given screen resolution and FOV, you can compute the distance at which an LOD2 feature starts to become visible (2+ pixels). The same goes for every LOD. You then have a performance constraint that may lead you to scale those distances down to reduce the load. In most CDB applications, you will have parameters to tune these paging distances depending on the desired visual quality, application type, and hardware performance.
So, in CDB, it is up to the client device to decide when to page features in; it is not coded in the database. However, the CDB LODs are structured to facilitate that "runtime decision".
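That runtime decision might look something like the following: given per-LOD paging distances (precomputed as described above, finer LODs having shorter distances) and the viewer's distance to a tile, pick the finest LOD whose scaled paging distance still covers the viewer. This is a sketch of one possible client policy, not something the CDB spec prescribes.

```python
def lod_to_page(viewer_distance_m, paging_distances, scale=1.0):
    """Finest LOD whose (scaled) paging distance exceeds the viewer distance.

    paging_distances -- list where index i holds the distance at which
                        LOD i becomes visible (decreasing with i)
    scale            -- performance-tuning knob: < 1.0 trades visual
                        quality for load by paging LODs in later
    """
    chosen = 0
    for lod, d in enumerate(paging_distances):
        # Distances decrease with LOD, so the last one that still
        # covers the viewer is the finest LOD worth paging in.
        if viewer_distance_m <= d * scale:
            chosen = lod
    return chosen
```

Exposing `scale` (often per-dataset) is one way to implement the tuning parameters mentioned above.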
Does that answer the questions?
That makes sense. Thanks for the information Herm.