Tributech
Digital Twins
Description
In his devjobs.at TechTalk, Maximilian Mayr of Tributech gives an insight into the topic of digital twins and what the Apollo 13 mission has to do with it.
Video Summary
In "Digital Twins," Maximilian Mayr (Tributech) uses the Apollo 13 simulator analogy to show how digital twins represent physical assets, collect data, issue commands, and organize systems across manufacturing, healthcare, smart cities, and data economies. He focuses on DTDL (JSON‑LD)—with interfaces plus telemetry, properties, commands, relationships, and components—and DTMI identifiers, including an example model/instance with metadata. Viewers learn how to abstract IoT devices, mock communications, auto‑generate generic UIs, maintain consistent end‑to‑end data models, enhance traceability via telemetry, and explore platforms like Eclipse D2 and a Microsoft digital twin service.
Digital Twins, DTDL, JSON‑LD, and DTMI for Engineers: Our Deep‑Dive Recap of “Digital Twins” by Maximilian Mayr (Tributech)
Why a “twin” at all? The Apollo 13 bridge
Maximilian Mayr opens “Digital Twins” with a vivid story: the Apollo 13 mission. On the way to the Moon, the crew faced an unidentified problem. “Houston, we have had a problem” was all they could say. Exiting the spacecraft to inspect the issue was impossible. The solution came from Earth—simulators equipped with the same hardware as the real spacecraft and connected to computers. Engineers adapted these simulators, reproduced the fault, tested workarounds, and sent the fix back to space.
This captures the essence: a twin that parallels the real system, mirrors behavior, and lets engineers test safely. A digital twin extends that idea into software—a digitalized counterpart that receives data, sends commands, organizes into groups, and manages real‑world systems in a digital context.
From physical simulator to digital twin
Translating the Apollo setup into today’s practice leaves us with key ingredients: connectivity, data, a model, and the ability to represent state and actions consistently. The digital part adds the following:
- Telemetry streams mirror sensor data from the physical world.
- Digital models define properties (e.g., “continue measurement”), commands (“measure every 10 minutes”), and relationships (“room belongs to building”).
- Instances of those models represent concrete devices, rooms, or systems.
- Grouping and relationships let us manage complex domains—think of a smart city—with structure and clarity.
In short, a digital twin is a formal, machine‑readable representation of a real system—an entity that both communicates state and enables action.
Where digital twins fit: manufacturing, healthcare, smart cities, data economies
Mayr keeps the scope broad and highlights domains where twins make a tangible difference:
- Manufacturing: Machines with sensors can be modeled as twins with structured access to telemetry.
- Healthcare: A heart can be modeled as a digital twin to standardize access to heart rate data.
- Smart city: Each building is a twin that contains rooms; rooms have sensors or lights. Districts group buildings; streets host entities like cars and persons—again, candidates for digital twins.
- Data economies: Standardized, sharable data streams—like a temperature sensor—become discoverable via a twin model. Consumers know what the sensor is and how to access the stream.
The takeaway: digital twins let us model almost everything—entities and relations—if we standardize the way we describe them.
DTDL: the modeling language built on JSON‑LD
To make twins interoperable across devices, services, and teams, we need a clear description language. Enter the Digital Twins Definition Language (DTDL): a modeling language initiated by Microsoft with industry partners, open source, and based on JSON‑LD (JSON for Linked Data). Many web developers know JSON‑LD from website metadata; DTDL leverages the same paradigm to model IoT devices and their digital twins.
A key insight Mayr underlines is that, in modeling, “some restrictions are always better than no restrictions.” DTDL gives you exactly the building blocks you need for most IoT scenarios. That controlled expressiveness keeps models readable, reusable, and tooling‑friendly.
The DTDL building blocks
DTDL describes twins precisely through six building blocks: an interface container plus five content types.
- Interface: The container that groups the other blocks. Interfaces can extend other interfaces (inheritance) and aggregate telemetry, properties, commands, relationships, and components.
- Telemetry: Data a device emits as a stream or at intervals—e.g., current temperature.
- Properties: State or configuration—e.g., “continue measurement,” on/off. Properties can take many schemas: string, integer, duration, map, object, or even another schema defined elsewhere and referenced by its identifier.
- Commands: Calls you send to the device—e.g., “return the current temperature once (not as a stream),” “switch on/off,” “measure every 10 minutes.” Conceptually, these are functions/methods with defined inputs and outputs.
- Relationships: They model structure and belonging—districts have buildings; buildings have rooms; rooms belong to houses.
- Component: A minimal, self‑contained unit embedded within an interface. When a device’s sub‑parts should not be modeled as multiple separate interfaces, a component captures each as the smallest possible unit.
This gives practitioners exactly what they need: measurements, state, actions, structure, and reuse.
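To make the blocks concrete, here is a minimal sketch of one interface combining them, written as a TypeScript constant in DTDL v2’s JSON shape. All identifiers (the thermometer DTMI, temperature, poweredOn, readOnce, locatedIn) are illustrative assumptions, not names from the talk:

```typescript
// A minimal DTDL v2 interface combining the building blocks.
// All identifiers below are illustrative, not from the talk.
const thermometerModel = {
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:io:tributech:thermometer;1",
  "@type": "Interface",
  displayName: "Thermometer",
  contents: [
    // Telemetry: data the device emits as a stream
    { "@type": "Telemetry", name: "temperature", schema: "double" },
    // Property: state or configuration, optionally writable
    { "@type": "Property", name: "poweredOn", schema: "boolean", writable: true },
    // Command: a callable action with defined inputs/outputs
    {
      "@type": "Command",
      name: "readOnce",
      response: { name: "currentTemperature", schema: "double" },
    },
    // Relationship: structure and belonging
    { "@type": "Relationship", name: "locatedIn", target: "dtmi:io:tributech:room;1" },
  ],
};
```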
DTMI: unambiguous identity for models
Once you have a model, you need a robust, human‑readable identifier. DTDL defines the Digital Twin Model Identifier (DTMI) for this purpose. A DTMI has three core parts:
- Scheme: always “dtmi”.
- Namespace/path: commonly built by reversing your fully qualified domain name (e.g., turning “tributech.io” into “io:tributech”) and then appending any path you need. The reversal minimizes collisions; if collisions do occur, they are more likely local (within one organization or region, say) and easier to resolve.
- Version number: models evolve, and versions are essential for change management and compatibility.
The result is a readable, globally unique identifier for each model, simplifying referencing, versioning, and instantiation.
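As a sketch, the three parts line up like this; the identifier itself is a hypothetical example, not one quoted in the talk:

```typescript
// scheme : namespace/path (reversed domain + path) ; version
//  dtmi  : io:tributech:sensor:temperature         ; 1
const modelId = "dtmi:io:tributech:sensor:temperature;1";

// Splitting along those three parts:
const scheme = modelId.split(":")[0];                 // "dtmi"
const [path, version] = modelId.slice(5).split(";");  // path and version
console.log(scheme, path, version);                   // dtmi io:tributech:sensor:temperature 1
```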
From model to instance: inheritance, contents, metadata
Mayr contrasts a model with its instance. On the model side, you see inheritance via “extends,” and on the instance side you see concrete properties and a metadata link back to the model via its DTMI. Highlights include:
- Inheritance (extends): reuse common parts and capture specialization cleanly.
- contents: where the building blocks live—telemetry, properties, commands, relationships, components.
- Metadata on each building block: descriptions, comments, display names, writability, and the schema (data type).
- Model reference in the instance: a metadata tag points back to the model via DTMI so every running instance is tied to its model.
This separation keeps the contract clean: the model is the specification; the instance is the concrete realization.
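A compact sketch of both sides, assuming the instance uses the JSON shape popularized by Azure Digital Twins ($dtId, $metadata.$model); the model and property names are invented for illustration:

```typescript
// Model side: a specialization reusing a base interface via "extends".
const outdoorThermometerModel = {
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:io:tributech:outdoorThermometer;1",
  "@type": "Interface",
  extends: "dtmi:io:tributech:thermometer;1",
  contents: [
    {
      "@type": "Property",
      name: "weatherproofRating",
      schema: "string",
      writable: false,
      description: "Ingress protection class of the housing.",
    },
  ],
};

// Instance side: $metadata.$model points back to the model's DTMI,
// tying this concrete twin to its specification.
const westWallThermometer = {
  $dtId: "thermometer-west-wall",
  $metadata: { $model: "dtmi:io:tributech:outdoorThermometer;1" },
  poweredOn: true,
  weatherproofRating: "IP67",
};
```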
Why developers benefit: abstraction, mocking, UI generation, consistency
Mayr distills the developer benefits of the digital twin model standard:
- Abstract over device‑specific quirks: You don’t need to know every device out there; you only need to know the building blocks and how they can be combined.
- Mock data and server responses: Because valid communication is defined, you can reliably mock telemetry and command responses.
- Generate generic UIs: “You only have to represent each building block and then you can just basically stick it together.” That enables generic, model‑driven UIs instead of bespoke screens per device.
- Consistent representation across layers: The twin file becomes a single source of truth from device to backend to frontend.
- Domain‑driven modeling: Use the same building blocks to express your domain language, regardless of sector.
- Traceability via telemetry: When something changes and goes wrong, telemetry will signal it immediately, making incident response and post‑change analysis easier.
As Mayr puts it, it’s “basically like an OpenAPI standard, just for IoT devices.” That analogy resonates in day‑to‑day engineering: explicit contracts, documented interfaces, and strong potential for generation and validation.
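To show why that contract enables mocking and generic UIs, here is a sketch that walks an interface’s contents and dispatches on the block type. mockValue and widgetFor are hypothetical helpers, not part of any DTDL SDK:

```typescript
// Walk a model's contents once; the same dispatch drives both mocks and UI.
type DtdlContent = { "@type": string; name: string; schema?: string };

const contents: DtdlContent[] = [
  { "@type": "Telemetry", name: "temperature", schema: "double" },
  { "@type": "Property", name: "poweredOn", schema: "boolean" },
  { "@type": "Command", name: "readOnce" },
];

// Mock telemetry/responses: valid values follow from the declared schema.
function mockValue(schema?: string): unknown {
  switch (schema) {
    case "double": return Math.random() * 40;
    case "boolean": return Math.random() < 0.5;
    case "string": return "mock";
    default: return null; // complex schemas omitted in this sketch
  }
}

// Generic UI: one widget per building block, stuck together per the model.
function widgetFor(c: DtdlContent): string {
  switch (c["@type"]) {
    case "Telemetry": return `<chart bind="${c.name}">`;
    case "Property": return `<editor bind="${c.name}">`;
    case "Command": return `<button invoke="${c.name}">`;
    default: return "";
  }
}

console.log(contents.map(widgetFor).join("\n"));
console.log(mockValue("double"));
```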
Platform options: the Eclipse Ditto project and Microsoft’s Azure Digital Twins
For working with digital twins in practice, Mayr points to two platforms:
- Eclipse Ditto: completely open source; you can create, talk to, and manage twins.
- Azure Digital Twins: Microsoft’s digital twin service for managing your twins.
The key point is that DTDL‑based models have operational homes—places where instances live, telemetry flows, and commands execute.
A practical path: modeling a temperature sensor with DTDL
To translate the talk’s ideas into a hands‑on path, here is a straightforward process that uses only the building blocks and practices Mayr describes:
- Define the real‑world entity: a temperature sensor.
- Create an interface: This will be the model container.
- Add telemetry: “current temperature” as a stream of measurements. The schema captures the numeric type (and unit as part of the schema).
- Add properties: e.g., “poweredOn” (boolean) and “measurementInterval” (duration). Set writability as appropriate.
- Add commands: e.g., “readOnce” for a single temperature value, “setInterval(10 minutes),” “switchOn/Off.”
- Add relationships: Sensor belongs to a room; room belongs to a building; building belongs to a district. This ties into smart‑city‑like structures.
- Use components if needed: If the sensor contains internal modules, bundle them as components—or keep the sensor as the smallest unit.
- Assign a DTMI: the scheme “dtmi,” your reversed domain, a meaningful path (e.g., “sensor:temperature”), and a version such as “;1”.
- Fill out metadata: descriptions, display names, comments, writability, and schemas for each block.
- Instantiate the model: A concrete sensor at location X references the model by DTMI and sets initial property values.
- Connect telemetry: Wire the physical stream into the twin’s telemetry; map commands to device actions.
- Generate a generic UI: Render building blocks (telemetry views, property editors, command invocations) without device‑specific hardcoding.
- Test with mocks: Simulate data flows, exercise commands, and validate error handling. Because the allowed exchanges are defined, mocks are reliable.
This path reflects the session’s core principle: use a small, precise set of building blocks to get from a domain model to a running instance.
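Condensed into one document, the path above might yield a model and instance like the following. The DTMI, names, ISO 8601 duration, and the Azure-style instance shape are assumptions for this sketch, not artifacts from the talk:

```typescript
// Steps 1-9: the temperature sensor model.
const temperatureSensorModel = {
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:io:tributech:sensor:temperature;1", // dtmi + reversed domain + path + version
  "@type": "Interface",
  displayName: "Temperature Sensor",
  description: "Measures ambient temperature in a room.",
  contents: [
    { "@type": "Telemetry", name: "currentTemperature", schema: "double" },
    { "@type": "Property", name: "poweredOn", schema: "boolean", writable: true },
    { "@type": "Property", name: "measurementInterval", schema: "duration", writable: true },
    { "@type": "Command", name: "readOnce", response: { name: "value", schema: "double" } },
    { "@type": "Command", name: "setInterval", request: { name: "interval", schema: "duration" } },
    { "@type": "Relationship", name: "belongsTo", target: "dtmi:io:tributech:room;1" },
  ],
};

// Step 10: a concrete instance at location X, tied to the model by DTMI
// (shown in the JSON shape Azure Digital Twins uses).
const sensorAtLocationX = {
  $dtId: "temp-sensor-location-x",
  $metadata: { $model: "dtmi:io:tributech:sensor:temperature;1" },
  poweredOn: true,
  measurementInterval: "PT10M", // ISO 8601 duration: every 10 minutes
};
```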
Patterns for complex domains: smart city and healthcare
Two domains demonstrate the modeling power particularly well:
- Smart city: Buildings as twins, containing rooms as twins. Rooms have sensors (e.g., temperature) and actuators (e.g., light). Districts group buildings; streets host entities like cars and persons—again, twins. Relationships are the star here, capturing structure and belonging.
- Healthcare: A heart modeled as a twin with heart rate as standardized telemetry. The goal is to make the data discoverable and accessible in a consistent way.
In both, the outcome is structured access, standardized semantics, and strong reuse—rather than a patchwork of one‑off integrations.
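The structural part of the smart-city pattern fits into a few relationship declarations; the DTMIs and relationship names here are invented for illustration:

```typescript
// Building -> room -> sensor/light, expressed purely as relationships.
const buildingModel = {
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:io:tributech:building;1",
  "@type": "Interface",
  contents: [
    { "@type": "Relationship", name: "hasRoom", target: "dtmi:io:tributech:room;1" },
  ],
};

const roomModel = {
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:io:tributech:room;1",
  "@type": "Interface",
  contents: [
    { "@type": "Relationship", name: "hasSensor", target: "dtmi:io:tributech:sensor:temperature;1" },
    { "@type": "Relationship", name: "hasLight", target: "dtmi:io:tributech:light;1" },
  ],
};
```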
Model quality: constraints as enablers
A standout message is that well‑chosen constraints make modeling easier rather than harder. With DTDL’s defined blocks you get:
- lower onboarding overhead (building blocks beat free‑form schemas),
- better tooling (validation and generation are realistic),
- consistent integrations (device/backend/frontend speak the same language),
- fewer surprises in operations (verifiable interfaces, clear change flow through versions).
These traits directly improve the reliability of IoT platforms, especially when many devices and domains converge.
Observability and change: telemetry as your early warning
Mayr stresses that telemetry lights up immediately when a change goes wrong. Hooked up end to end, telemetry makes deviations and error states visible. That yields:
- faster root‑cause discovery,
- fewer blind spots,
- better post‑change forensics via model/instance linkage (DTMI) and explicit versioning.
The Apollo 13 theme echoes here: a solid model (then, a simulator) and good measurements are what make diagnosis and fix validation possible.
Practical takeaways for engineers
Based on Maximilian Mayr’s session, here are the actions we would prioritize:
- Treat DTDL as the contract language between device, backend, and frontend.
- Assign DTMI consistently: readable, versioned, using reversed domains.
- Keep model and instance separate. Ensure instances reference their model explicitly.
- Think in building blocks rather than devices: telemetry, properties, commands, relationships, components.
- Build relationships deliberately: hierarchies (district → building → room → sensor) simplify admin, search, and UI.
- Exploit mocking: define allowed communication and test end‑to‑end without real hardware.
- Generate generic UIs from the model—reduce per‑device special‑case code.
- Watch telemetry closely after changes—it’s your early warning system.
- Plan for versioning as a normal part of development; models will evolve.
Conclusion: simple blocks, big impact
“Digital Twins” by Maximilian Mayr (Tributech) shows how a tightly scoped set of building blocks enables robust, comprehensible digital twins. DTDL provides the language, JSON‑LD the foundation, and DTMI the identity. The Apollo simulator story becomes a general digital pattern: ingest telemetry, send commands, model relationships, and version instances—so that teams across domains can work from the same source of truth.
Whether in manufacturing, healthcare, smart cities, or data economies, the principles remain the same. The most resonant line for us was the analogy that DTDL is “basically like an OpenAPI standard, just for IoT devices.” That’s exactly why the approach clicks for engineering teams.
To get started, Mayr points to two options: the open‑source Eclipse Ditto project and Microsoft’s Azure Digital Twins service. Tool choice aside, the first step matters most: a clear model, a solid DTMI, and disciplined use of telemetry, properties, commands, relationships, and components. From there, the rest follows—model‑driven, traceable, and scalable.