
Introduction and Trends in Automotive Software

Description

In his devjobs.at TechTalk, Dr. Georg Gaderer of Elektrobit gives an overview of automotive software and discusses future trends in the field.


Video Summary

In “Introduction and Trends in Automotive Software,” Dr. Georg Gaderer (Elektrobit) outlines the megatrends—automated driving, electrification, connectivity, and shared mobility—and why cars are safety‑critical software systems at massive scale (>100M vehicles running EB software). He contrasts AUTOSAR Classic (real‑time, C, signal‑based, very short boot) with Adaptive AUTOSAR on POSIX (service‑oriented over IP/Ethernet, longer boot, currently lower safety levels) and explains quality practices such as MISRA/ISO 26262 compliance, over a million daily tests, and static code analysis. He then details vehicle HPCs with virtualized multi‑OS stacks, hardware accelerators for communication isolation and time sync, and the shift from domain to centralized to zonal architectures—guidance engineers can apply to function partitioning, platform selection, and designing secure, updatable systems.

From Domain Networks to HPC and Adaptive AUTOSAR: Introduction and Trends in Automotive Software with Dr. Georg Gaderer (Elektrobit)

Why this session matters

At DevJobs.at, we tuned into “Introduction and Trends in Automotive Software” to understand how today’s vehicles are becoming software-defined machines. In this session, Dr. Georg Gaderer from Elektrobit makes the technical story unambiguous: more compute, more functions, and uncompromising safety shape the way automotive software is designed, built, and validated.

“Product quality is most important.” The talk revolves around this: automotive software must work correctly and safely—because mistakes can have real-world consequences.

Below is our engineering-focused recap: ECUs and zones, Classic vs. Adaptive AUTOSAR, high-performance computers (HPCs), hardware acceleration, and the architectural shift from domain to centralized to zone controllers. We stick strictly to the content of the session and pull out actionable takeaways for practitioners.

The backdrop: Elektrobit and four industry megatrends

Dr. Gaderer starts by positioning Elektrobit: a global automotive software supplier, 100% owned by Continental yet independent as a company, with about 3,500 people and a focus on automotive-grade software. The deployment footprint is striking—over 100 million vehicles on the road run Elektrobit software, totaling roughly one billion embedded devices.

He frames the industry around four major streams:

  • Automated driving
  • Electrification (lower CO2 footprint)
  • Connectivity (cloud/infrastructure-driven services and features)
  • Shared mobility

These megatrends push architectures forward: higher compute capacity, tighter integration between software and hardware, and new networking and safety requirements.

ECUs in context: performance boxes, sensors, actuators, and integration/control

To reason about software trends, it helps to understand the ECU landscape inside a vehicle:

  • Performance/computation units: High-powered compute boxes, such as a “car server,” for intensive workloads.
  • Sensors: From cameras to vehicle speed and steering wheel speed sensors; modern cars include dozens to hundreds of them.
  • Actuators: They turn commands into action—e.g., switching the brake light.
  • Integration/control units: Placed in “zones” within the car, these units integrate and control local functionality. The zone concept becomes important later.

All these components must coordinate deterministically and reliably, with clear communication models, under safety constraints.

Smartphone vs. car: similar idea, different world

The “software-defined car” mirrors the app-driven expansion we know from smartphones—but the differences are substantial:

  • Smartphone: One microprocessor, one display, one OS, a handful of sensors, very light, and it never moves at vehicle speeds.
  • Car: Hundreds of microcontrollers, multiple displays, multiple OSes, hundreds of sensors, roughly two tons, and potentially 250 km/h.

The implication is immediate: the automobile is safety-related. Software must start promptly, behave deterministically, and be safe at all times. The cost of failure is not a crash dialog—it’s a crash in the real world.

Safety first: standards, requirements, massive testing

How do you produce automotive software under these constraints? The talk calls out several pillars:

  • Strict programming rules: MISRA-C and MISRA-C++ with safety and security amendments to prevent common pitfalls (e.g., overflows, divide-by-zero).
  • Additional requirements: internal requirements at Elektrobit and OEM (car manufacturer) requirements, including initiatives like Herstellerinitiative Software.
  • Standards: including ISO 26262, as well as further standards such as IEC 61508 and DO-178C (the latter originating in the aerospace domain).
  • Risk management: a constant, explicit measurement of risk to manage it effectively.
  • Large-scale testing: over one million tests run daily, complemented by static code analysis with sophisticated tools to catch issues such as overflows or divide-by-zero.

The aim is simple and strict: do the right thing right—both functionally and safely.
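The defensive programming rules above can be made concrete in C. The following helper is a minimal, hypothetical sketch (not from the talk) of the kind of guard MISRA-style rules and static analyzers push toward, covering the two failure modes the talk names: divide-by-zero and overflow.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* Divide two signed 32-bit values while guarding against divide-by-zero
 * and the one signed-division overflow case in two's complement
 * (INT32_MIN / -1). Returns false on error instead of invoking
 * undefined behavior. */
bool safe_div_i32(int32_t num, int32_t den, int32_t *out)
{
    if ((out == NULL) || (den == 0) ||
        ((num == INT32_MIN) && (den == -1))) {
        return false; /* reject the operation rather than trap at runtime */
    }
    *out = num / den;
    return true;
}
```

The caller is forced to handle the error path explicitly, which is exactly the kind of property a static analyzer can then verify across the codebase.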

Classic vs. Adaptive AUTOSAR: two complementary layers

A central section of the talk contrasts AUTOSAR Classic with AUTOSAR Adaptive.

AUTOSAR Classic: RTOS, C code, short boot times

  • Stack: C-programmed software, a real-time OS, typically delivered entirely by the supplier—thus under full control.
  • Safety: Suitable for safety-related areas due to determinism and tight control.
  • Communication: Signal-based (e.g., CAN, LIN, FlexRay) or service-based; in all cases designed for robust, real-time behavior.
  • Hardware: Microcontrollers with specialized hardware.
  • Boot time: Extremely short. Think of the brake light—you can’t afford multi-second startup delays.

AUTOSAR Adaptive: POSIX, service orientation, IP/Ethernet

  • Stack: A POSIX OS (Linux, QNX, or Android) underneath, with automotive software on top.
  • Safety: As of today, not beyond ASIL B; it still must be secure.
  • Communication: Purely service-oriented over IP/Ethernet; no signal-based buses like CAN attached to these devices.
  • Boot time: Seconds-scale boot is acceptable for certain components (e.g., head units), though reaching ~2 seconds is still challenging for Linux.

In practice, vehicles combine both: strict real-time/safety-critical functions remain in Classic, while service-oriented and compute-heavy features land in Adaptive, each with the boot and communication models that fit.
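To make the signal-based model of the Classic world concrete, here is a small C sketch of packing a signal into a fixed frame layout. The signal name, scaling, and byte positions are invented for illustration; they are not from the talk.

```c
#include <stdint.h>

/* Hypothetical signal layout: vehicle speed in 0.01 km/h units, 16 bits,
 * little-endian, at byte offset 2 of an 8-byte CAN frame. In the
 * signal-based model, sender and receiver agree on this static layout
 * at design time -- there is no runtime service discovery as in the
 * service-oriented Adaptive world. */
void pack_speed_signal(uint8_t frame[8], uint16_t speed_raw)
{
    frame[2] = (uint8_t)(speed_raw & 0xFFu);
    frame[3] = (uint8_t)((speed_raw >> 8) & 0xFFu);
}

uint16_t unpack_speed_signal(const uint8_t frame[8])
{
    return (uint16_t)((uint16_t)frame[2] | ((uint16_t)frame[3] << 8));
}
```

The static layout is what makes this path so deterministic and fast to bring up: no negotiation, no discovery, just bytes at agreed offsets.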

Second trend: High-performance computers (HPCs) as vehicle servers

With the emergence of vehicle servers (HPCs), complexity ramps up. Dr. Gaderer illustrates this through the growth in lines of code:

  • Older body controllers: roughly 200,000 to 2 million lines of code.
  • Navigation systems: around 4 million lines.
  • High-end infotainment: around 10 million lines.
  • Today’s high-performance controllers: around 20 million lines—and still rising.

Technically, this looks like:

  • Multiple operating systems on a single SoC: a real-time OS (Classic), multiple POSIX OSes (Linux, Android, QNX), and typically a security OS.
  • Virtualization: These OSes run as virtual machines on top of a hypervisor.
  • Boot orchestration: Multiple boot managers to bring the whole system up correctly.
  • Updates: The boxes must be updatable without breaking them—another engineering challenge intertwined with boot and virtualization.
  • Hardware footprint: 2–4 processors totaling up to 28 cores.

For engineers, that means mixed-criticality designs with strict isolation, curated communication, and update-safe boot chains become standard design problems.
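As one illustration of an update-safe boot chain, consider a hypothetical A/B slot selector: two firmware slots, and the boot manager falls back to the previous slot when the newly written one fails to confirm a successful boot. This sketch is not from the talk; real boot managers add signature checks, watchdogs, and rollback protection.

```c
#include <stdint.h>
#include <stdbool.h>

/* One firmware slot in a hypothetical A/B update scheme. */
typedef struct {
    bool    valid;       /* image integrity check passed */
    uint8_t boot_tries;  /* unconfirmed boot attempts so far */
} slot_t;

#define MAX_TRIES 3u

/* Pick a slot to boot: prefer the requested (freshly updated) slot,
 * fall back to the other one if it is invalid or has exhausted its
 * boot attempts; return -1 if neither is bootable (recovery mode). */
int select_boot_slot(const slot_t *a, const slot_t *b, int preferred)
{
    const slot_t *pref  = (preferred == 0) ? a : b;
    const slot_t *other = (preferred == 0) ? b : a;

    if (pref->valid && (pref->boot_tries < MAX_TRIES)) {
        return preferred;                 /* boot the updated slot */
    }
    if (other->valid && (other->boot_tries < MAX_TRIES)) {
        return (preferred == 0) ? 1 : 0;  /* fall back to the old image */
    }
    return -1;                            /* no bootable image */
}
```

The key property is that a failed update never strands the box: the previous image remains bootable until the new one has proven itself.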

Third trend: Hardware acceleration for communication (COM accelerators)

Dr. Gaderer points to an example MCU, the NXP S32G (the general ideas apply broadly). The diagram distinguishes classic and adaptive controller cores—and introduces hardware COM accelerators.

Why this direction?

  • Offloading core work: Communication handling can be moved to dedicated hardware instead of burdening general-purpose cores.
  • Separation: Misbehaving software in one OS cannot affect the communication of others—hardware enforces separation.
  • Firewalling: The accelerator can implement controlled, segmented access to communication paths.
  • VM independence: Virtual machines remain decoupled—even if one VM runs a braking controller and another hosts an Android app.
  • Gateway functions: Box-to-box communication remains possible in a controlled way.
  • Clock synchronization: Establish a common notion of time across the system.

The engineering takeaway is clear: put safety- and real-time-critical communication on dedicated hardware paths. It improves determinism, safety, and security.
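Clock synchronization over automotive Ethernet typically builds on two-way time transfer, as in IEEE 802.1AS (gPTP); the talk names the goal (a common notion of time) but not a specific protocol, so take this as background. The core offset arithmetic, assuming a symmetric path delay, looks like this:

```c
#include <stdint.h>

/* Two-way time transfer: t1 = master send, t2 = slave receive,
 * t3 = slave send, t4 = master receive (all in nanoseconds).
 * Under a symmetric path-delay assumption, the slave's clock offset
 * from the master is offset = ((t2 - t1) - (t4 - t3)) / 2. */
int64_t clock_offset_ns(int64_t t1, int64_t t2, int64_t t3, int64_t t4)
{
    return ((t2 - t1) - (t4 - t3)) / 2;
}
```

Doing this timestamping in the COM accelerator rather than in software is what keeps the resulting time base precise regardless of what the VMs above are doing.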

Architecture shift: from domain to centralized to zone controllers

The talk outlines an architectural evolution:

1) Domain architecture

  • Each domain (e.g., brakes, body control, infotainment) has its own network.
  • Gateways connect the domains.
  • Drawback: In a real car, this means multiple parallel wiring harnesses—inefficient and complex.

2) Centralized architecture

  • Introduces a centralized HPC (a vehicle server) as the core compute unit.
  • This phase coincides with the introduction of Adaptive AUTOSAR and Ethernet into the car.
  • Benefit: a single, centralized backbone, reducing redundant wiring.

3) Controllers per vehicle region (“zones”)

  • Place a control/integration box at each region of the car to manage local functions and enable cross-zone communication.
  • Localize traditional fieldbuses (CAN, LIN, FlexRay) within zones while preserving the benefits of distributed control.

This transition moves from isolated domains to a centralized backbone and then to structured zones—built on a unified network fabric.
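One way to picture the gateway role of a zone controller is a simple allow-list filter that decides which frames may cross zones. The function and IDs below are hypothetical, sketching the "localize fieldbuses, bridge in a controlled way" idea:

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical zone-gateway filter: only CAN IDs explicitly
 * allow-listed for cross-zone traffic are forwarded onto the
 * Ethernet backbone; everything else stays on the local fieldbus. */
bool forward_across_zones(uint32_t can_id,
                          const uint32_t *allow, size_t n)
{
    for (size_t i = 0u; i < n; i++) {
        if (allow[i] == can_id) {
            return true;   /* bridged to the backbone */
        }
    }
    return false;          /* stays local to the zone */
}
```

Deny-by-default filtering like this is also a firewalling primitive: local bus chatter cannot flood the backbone, and the backbone cannot inject arbitrary traffic into a zone.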

Boot times, safety levels, communication models: design rules of thumb

The session conveys several decision anchors:

  • Boot time dictates the stack: If you need instantaneous behavior (e.g., brake lights), Classic/RTOS is the right layer. For comfort features, seconds of boot time may be acceptable.
  • Safety level vs. OS stack: High safety demands remain the domain of Classic. Adaptive is not beyond ASIL B (as presented) and must still be secure.
  • Communication strategy: Signal-based paths (CAN, LIN, FlexRay) live in Classic; service-oriented, IP/Ethernet-based communication lives in Adaptive. HPCs must both isolate and bridge these worlds.
  • Virtualization required: Multiple OSes on one SoC implies hypervisors, multiple boot managers, and carefully designed update procedures.
  • Use hardware offload: Move communication workloads to accelerators to reduce core load, interference risk, and latency.

These are not abstract principles—they directly respond to the design pressures and patterns highlighted in the talk.

Testing and analysis at automotive scale

“Over one million tests running daily” isn’t a vanity metric; it signals the operational scale required to ensure correctness:

  • Broad functional coverage, including interfaces and regressions.
  • High automation and reproducibility.
  • Static analysis to catch critical classes of defects (e.g., overflows, divide-by-zero) early.
  • Risk-driven testing, prioritizing safety-critical paths.

For teams, this means tooling, infrastructure, and a clear definition of done are as critical as writing code. Safety is designed in and validated continuously.

Actionable takeaways for engineers

Based on Dr. Georg Gaderer’s session (Elektrobit), we distilled practical guidance for system and software engineers:

  • Specify requirements early: Boot/startup targets, safety levels, and communication paths guide the split between Classic and Adaptive and the use of hardware acceleration.
  • Secure mixed criticality: Partition SoCs with hypervisors; isolate safety-critical control from less critical workloads—both in software and in hardware.
  • Embrace service orientation deliberately: IP/Ethernet in Adaptive brings flexibility but demands security-by-design and robust update strategies.
  • Localize fieldbuses: Constrain CAN/LIN/FlexRay within zones; bridge via gateways and COM accelerators in a controlled fashion.
  • Plan for time: Establish a coherent time base across VMs and zones to enable deterministic distributed behavior.
  • Scale testing: Continuous testing and static analysis are mandatory; “over a million tests daily” is the operational magnitude modern automotive projects require.
  • Respect the standards: MISRA-C/C++, Herstellerinitiative Software, ISO 26262, and the further standards cited (IEC 61508, DO-178C) provide the non-negotiable framework.

Each of these points follows directly from the talk’s content—no assumptions added.

Conclusion: Challenging—and exactly why it’s exciting

“Introduction and Trends in Automotive Software” by Dr. Georg Gaderer (Elektrobit) shows how computing trends—Adaptive/service orientation, HPC/virtualization, and hardware acceleration—enable megatrends like automated driving and connected services. At the same time, system complexity surges: wiring harnesses, ECUs, boot managers, hypervisors, safety levels, and networks form a tightly coupled system that must work correctly end to end.

We leave with three headline trends:

  • Performance compute complements real-time cores in today’s systems.
  • High-performance computers (HPCs) as vehicle servers raise both capability and complexity.
  • Hardware accelerators offload communication and enforce strong isolation.

Automotive software engineering is demanding—and deeply rewarding. With standards, tests, and crisp architectural boundaries, teams can “do the right thing right” and deliver the safe, connected, feature-rich vehicles that define the next generation of mobility.
