Question: Since Geo-Specific Model LOD decimation is purely significant size based in CDB 3.0 (3.3.2.1.1 GSModelGeometry Level-of-Details), what influence can or should RTAI have outside the bounds of a specific CDB LOD unless it also influences the decimation? If it should indeed influence the decimation, the exact method needs to be specified. Even within a CDB LOD, RTAI application is problematic for the following reasons.
Although the facilitation of client device load management has significantly improved from 2.1 to 3.0, and is somewhat further improved by the 3.1 Geo-Specific Model LOD proposal, it is still problematic. One issue is how to correlate point features from one CDB LOD to the next without an exhaustive run-time position and attribute comparison that would affect client device determinism. (Network datasets use unique IDs, such as EJID, JID, etc., to provide a similar correlation means.)
Answer:
Yes, RTAI influences the decimation at run time. The use of RTAI can affect the performance of client devices in two basic ways:
1) It may be used to "dynamically" control the run-time load of the client device. In this case, this is handled entirely by the existing scene load-management mechanism of the client device. RTAI then specifies the order of importance when two or more objects fall within the same significant-size range.
2) It may be used to "statically" control the run-time load of the client device. In this case, the key point is the representation of the finest level of detail that the client device can process. The finest representation corresponds to a tile in the client device that normally matches a CDB LOD tile. If the representation of the finest CDB LOD tile cannot be ingested by the client device, two options are left to the application. The first is not to display the CDB tile and rely on a coarser representation of it. This choice, although perfectly legal with CDB, is not optimal: it forces four tiles to be replaced by a single coarser tile when only one finer tile may exceed the client device's capability. A preferred option is to decimate the finest CDB LOD tile on a per-object basis using the RTAI. The order of decimation at the finest LOD is not specified in the CDB specification and is currently left to the application.
An application may decide to decimate from the coarsest level, i.e., fill the maximum number of objects at the coarsest level and populate gradually toward the finest based on the RTAI. This approach ensures that the maximum number of objects is processed. Another way is to decimate from the finest level, i.e., populate the finest first and decimate based on the RTAI. This ensures that objects are displayed at their finest level, but there will be fewer objects of lesser importance. Both approaches are deterministic at the finest level but are application dependent.
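The two strategies above can be sketched as follows. This is a hypothetical illustration only: the `Model` class, the 0-100 RTAI scale, and the per-LOD "parameter cost" are assumptions made for the sketch, not part of the CDB specification, which leaves the decimation order to the application.

```python
from dataclasses import dataclass, field

@dataclass
class Model:
    name: str
    rtai: int                       # assumed 0-100, higher = more important
    lod_costs: list = field(default_factory=list)  # cost per exchange LOD, coarsest -> finest

def coarse_first(models, budget):
    """Maximize object count: start every model at its coarsest LOD,
    then spend the remaining budget refining models in decreasing RTAI
    order. Assumes the set of coarsest representations fits the budget."""
    choice = {m.name: 0 for m in models}
    spent = sum(m.lod_costs[0] for m in models)
    for m in sorted(models, key=lambda m: -m.rtai):
        for lod in range(1, len(m.lod_costs)):
            delta = m.lod_costs[lod] - m.lod_costs[lod - 1]
            if spent + delta > budget:
                break
            choice[m.name] = lod
            spent += delta
    return choice

def fine_first(models, budget):
    """Maximize fidelity: admit models at their finest LOD in decreasing
    RTAI order until the budget is exhausted; lower-RTAI models are dropped."""
    choice, spent = {}, 0
    for m in sorted(models, key=lambda m: -m.rtai):
        cost = m.lod_costs[-1]
        if spent + cost <= budget:
            choice[m.name] = len(m.lod_costs) - 1
            spent += cost
    return choice
```

Given the same budget, `coarse_first` keeps every model but refines only the most important ones, while `fine_first` shows fewer models at full fidelity; both are deterministic for a fixed input.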
Once the decimation approach is determined at the finest level, the implementation must ensure that the intermediate LODs are coherent, i.e., a model cannot switch from coarse to fine and then back to coarse. This can be done by establishing a pre-determined parameter budget per client device tile, ensuring that the budget of the finer tile is always equal to or greater than that of the coarser tile, and checking the object LODs of the finer representation of the tile being processed. The run-time publisher in that sense "programs" the client device within its capabilities.
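The coherence constraints just described can be checked mechanically. The sketch below is illustrative, assuming budgets and per-model LOD choices indexed from the coarsest to the finest tile level; the function and field names are hypothetical, not defined by the CDB specification.

```python
def budgets_coherent(budgets):
    """A finer tile must have a parameter budget equal to or greater
    than the coarser tile's, so capacity never shrinks on refinement."""
    return all(b0 <= b1 for b0, b1 in zip(budgets, budgets[1:]))

def lods_coherent(choices):
    """choices: list of {model_name: chosen_lod} dicts, coarse -> fine.
    A model already shown at LOD n must not drop below n in a finer
    tile, i.e., it never switches coarse -> fine -> coarse."""
    for coarser, finer in zip(choices, choices[1:]):
        for name, lod in coarser.items():
            if name in finer and finer[name] < lod:
                return False
    return True
```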
Question:
Another related issue is how to tell a newly appearing feature from a new exchange LOD representation of a previously specified (coarser LOD) feature. Scaling back database content to meet client device capacities usually requires this type of knowledge. Otherwise, features thinned or prioritized in coarser LODs may not correlate with similar decisions made while publishing finer LODs. The CDB does not currently appear to provide enough information to accomplish this in a deterministic and performant manner.
Answer:
The CDB has enough information to distinguish a newly appearing feature from a new exchange-LOD representation of a previously specified (coarser LOD) feature. See the answer above. However, an application note should be posted. Nothing prevents a run-time publisher from consulting the GS and GT feature datasets of a finer or coarser LOD.
Question/Comment:
It seems the only reasonable way to currently scale Geo-Specific Model point features to client capacity limits is to back off on the entire CDB Geo-Specific Model LOD. Only if features can be correlated, or if newly appearing features can be distinguished from exchange-LOD refinement, can a selective mechanism taking RTAI into account be used to thin features within the CDB LOD.
Answer:
See the proposed approach above to achieve correlation and determinism within a CDB LOD. Backing off the entire CDB LOD can be done. In that case, the RTAI would be used only to "dynamically" control the run-time load of the client device for the objects in a client device tile.
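The "dynamic" use of RTAI mentioned here (and in point 1 above) amounts to a priority ordering. The sketch below illustrates one possible ordering, assuming power-of-two significant-size bins and a 0-100 RTAI scale; both are assumptions for illustration, as the CDB specification does not prescribe the binning.

```python
import math
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    significant_size: float   # metres, per the CDB LOD selection criteria
    rtai: int                 # assumed 0-100, higher = more important

def size_bin(f):
    """Bucket significant size into coarse power-of-two ranges
    (an illustrative assumption, not a CDB-defined binning)."""
    return int(math.floor(math.log2(max(f.significant_size, 1e-6))))

def display_order(features):
    """Larger significant-size range first; within the same range,
    RTAI breaks the tie, as described in the answer above."""
    return sorted(features, key=lambda f: (-size_bin(f), -f.rtai))
```

A client device shedding load would then drop features from the tail of this ordering first.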
Since Geo-Specific Model LOD decimation is purely significant size based in CDB 3.0 (3.3.2.1.1 GSModelGeometry Level-of-Details), what influence can or should RTAI have outside the bounds of a specific CDB LOD unless it also influences the decimation? If it should indeed influence the decimation, the exact method needs to be specified.
Yes, RTAI influences the decimation in run time.
Obviously that is the intent, as that is the defined purpose of the attribute. When I said influence the decimation, I meant the decimation defined by the CDB specification for the creation of 3.3.2.1.1 GSModelGeometry Level-of-Detail; not any run-time behavior. To restate this part of the question:
Should GSModelGeometry LODs be decimated by the CDB creation tools using RTAI in addition to significant size criteria? If so, the CDB specification should define the joint decimation criteria since it currently specifies only significant size criteria.
2) It may be used to "statically" control the run time load of the client device. In this case, the key point is the representation of the finest level of detail that the client device can process. [...] A preferred option is to decimate the finest CDB LOD tile on a per object basis based on the RTAI.
Agreed. At this point, there are many possible RTP decimation options to meet client capacity requirements. An RTP could choose to favor adding models rather than increasing model fidelity, or it could choose the opposite approach. It could rely on RTAI to balance model fidelity and density trade-offs, or it could choose one of many other such criteria. The point is that for an RTP to implement most of these criteria without artifacts, it is necessary to know if an object is being added or refined in the LOD being decimated. If it is being refined, it is also beneficial to know how to correlate the refined model with its coarser representation.
I do not believe that the CDB GSModelGeometry LOD dataset adequately addresses RTP performance needs with respect to client capacity decimation because of the computational complexity required to correlate model representations and identify newly appearing models.
Once the decimation approach is determined at the finest level
The assumption in this statement that the finest LOD level can be computed first, and then coarser LODs can be deterministically decimated from it, is an inherently non-scalable proposition. It is not feasible in a flight simulator, for instance, to compute all the finest resolution LODs from the ownship to the horizon and then use them to deterministically decimate in order to fit client capacity limits as a function of distance or angular size. Page-able, incremental (from coarse to fine) deterministic LOD decimation and publishing is absolutely required.
The CDB has enough information to distinguish a newly appearing feature from a new exchange-LOD representation of a previously specified (coarser LOD) feature. See the answer above. However, an application note should be posted. Nothing prevents a run-time publisher from consulting the GS and GT feature datasets of a finer or coarser LOD.
It is certainly possible for an RTP to load two consecutive GSModelGeometry LODs, perform a spatial sort, and compare the names and attributes of those features that appear to be coincident in order to determine this information. However, I believe the computational complexity and overhead required to process substantially dense LODs justify providing a more direct correlation method in the GSModelGeometry LOD dataset. This seems especially true since the correlation is known to the CDB creation tool when it sorts each exchange LOD of a logical model into a specific GSModelGeometry LOD, thereby separating it into difficult-to-recorrelate pieces. It seems unfair to require each RTP needing this information to reconstruct it when the information was lost by reformatting the data, ostensibly for RTP performance benefits.
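The exhaustive run-time correlation described above might look like the following sketch. The `Feature` fields and the position tolerance are illustrative assumptions; even with hashing, this pass must touch every feature in both LODs at every LOD transition, which is the overhead a unique-ID scheme would avoid.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Feature:
    name: str
    lat: float
    lon: float

def correlate(coarse, fine, tol=1e-6):
    """Split the fine-LOD features into those that refine a coarse-LOD
    feature (matched by quantised position and name) and those that are
    newly appearing. Returns (refined, new)."""
    key = lambda f: (round(f.lat / tol), round(f.lon / tol), f.name)
    index = {key(f) for f in coarse}
    refined = [f for f in fine if key(f) in index]
    new = [f for f in fine if key(f) not in index]
    return refined, new
```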
Thank you again for your time and effort in responding to my questions.
2) It may be used to "statically" control the run time load of the client device. In this case, the key point is the representation of the finest level of detail that the client device can process.
I'd just like to point out that there are often many "finest" levels of detail that the client device can process, since this is often budgeted as a function of range or angular size. As such, the need for RTP decimation (coarse capacity management) can occur at any or all client device LOD levels.
Craig
Please let me know if I have overlooked some feature of the CDB that provides the information necessary to accomplish this type of capacity scaling, or if there is an algorithm I have overlooked that can accomplish the task without this supplemental information. Thank you.