Towards Dynamic Urban Scene Synthesis: The Digital Twin Descriptor Service
Abstract
Digital twins have been introduced to support city operations, yet existing scene-descriptor formats and digital twin platforms often lack the integration, federation, and adaptable connectivity that urban environments demand. Modern digital twin platforms decouple data streams and representations into separate architectural planes, fusing them only at the visualization layer, which limits the potential for simulation or further processing of the combined assets. At the same time, geometry-centric file standards for digital twin description, and the services built on top of them, focus primarily on explicitly declaring geometry and additional structural or photorealistic parameters, complicating integration with evolving context information and limiting compatibility with newer representation methods. Moreover, multi-provider federation, critical in smart city services where multiple stakeholders may control distinct infrastructure or representation assets, is only sparsely supported. Consequently, most pilots isolate context from representation and fuse them per use case with ad hoc components, custom description files, or glue code, which hinders interoperability. To address these gaps, this paper proposes a novel concept, the Digital Twin Descriptor Service (DTDS), which fuses abstracted references to geometry assets and context information within a single, extensible descriptor service based on NGSI-LD. The proposed DTDS provides dynamic and federated integration of context data, representations, and runtime synchronization across heterogeneous engines and simulators. This concept paper outlines the DTDS architectural components and the description ontology that enable digital-twin processes in the modern smart city.
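To make the descriptor idea concrete, the following is a minimal sketch of what a DTDS descriptor could look like when expressed as an NGSI-LD entity. The entity type `DigitalTwinDescriptor`, the property names (`geometryAsset`, `representationMethod`, `contextSource`), and all identifiers and URLs are illustrative assumptions, not definitions from the paper; only the NGSI-LD structural conventions (`id`/`type`/`@context`, `Property` and `Relationship` nodes) follow the standard.

```python
import json

# Hypothetical DTDS descriptor as an NGSI-LD entity (all names illustrative).
# Geometry is referenced by URL rather than declared inline, and live context
# is linked by relationship, so representation and context stay fused in one
# descriptor instead of being joined ad hoc at the visualization layer.
descriptor = {
    "id": "urn:ngsi-ld:DigitalTwinDescriptor:bridge-042",
    "type": "DigitalTwinDescriptor",
    "geometryAsset": {
        "type": "Property",
        # Abstracted reference to a geometry asset, not explicit geometry
        "value": "https://assets.example.org/bridge-042.glb",
    },
    "representationMethod": {
        "type": "Property",
        "value": "glTF",
    },
    "contextSource": {
        "type": "Relationship",
        # Link to an evolving context entity (e.g. a sensor) by URN
        "object": "urn:ngsi-ld:Device:strain-sensor-17",
    },
    "@context": [
        "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld"
    ],
}

print(json.dumps(descriptor, indent=2))
```

In this sketch a consuming engine or simulator would resolve `geometryAsset` to fetch the representation and subscribe to `contextSource` for runtime synchronization; federation would then amount to different providers serving different descriptor entities through their own NGSI-LD brokers.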