
How danube.ai built: "Eine Upselling-AI als API für Geizhals"

Description

The company danube.ai describes in a TechTalk with devjobs.at how they built an upselling AI for a customer.


Video Summary

In How danube.ai built: "Eine Upselling-AI als API für Geizhals", Philipp Wissgott explains a personalized upselling API that avoids manual filters, cookies, and browsing history. Instead, it uses a user's reference product to define a multidimensional similarity sphere, ranks items by individualized price–performance, and lets users boost attribute weights live (demoed with the Samsung Galaxy M51). Built in two months on a JavaScript/Node.js SDK, Docker, and a worker framework with load balancing, the now highly optimized API handles many concurrent requests. The approach offers transferable ideas for e-commerce search and recommendations: leveraging implicit signals, modeling similarity spaces, and exposing interactive weighting.

Building an Upselling AI API for Geizhals: Price–Performance, Similarity Spheres, and Lightweight User Control — A DevJobs.at Engineering Recap

What we learned in “How danube.ai built: "Eine Upselling-AI als API für Geizhals"”

At DevJobs.at, we watched “How danube.ai built: "Eine Upselling-AI als API für Geizhals"” by Philipp Wissgott (Co‑Founder & CEO of danube.ai), who also leads the AI algorithms at the company. In this talk, he outlines how danube.ai built a production‑ready Upselling AI as an API that finds personally relevant products without traditional filters, cookies, or browsing history — and how the system optimizes for price–performance rather than raw price.

The key idea: instead of making users click through 20 filters, treat the product they land on as a strong signal — the “reference product” — and construct a “multidimensional sphere” in product similarity space around it. Within that sphere, the AI ranks options with similar or better price–performance and lets users nudge what matters via property “boosts.”

The problem with classic e‑commerce filters — and why price–performance wins

Wissgott starts with a common frustration: finding a good product on many e‑commerce sites takes tedious filter work. But most shoppers don’t care whether a phone costs €399 or €400 — they care whether it’s a good deal for what it offers.

  • Long filter lists demand domain knowledge and effort.
  • “Performance” is personal: one buyer values battery life, another the camera, another model recency.
  • Filter tinkering often obscures relevance instead of surfacing it.

danube.ai’s hypothesis was straightforward yet ambitious: can we build an AI API that proposes the right price–performance product for you, without any manual filters or tracking? The team pitched this to Geizhals — they were excited, and the build began.

The core signal: the reference product

Most sessions begin with a product search; many shoppers land directly on a product page from Google. That entry point — the specific SKU a user is viewing — carries rich preference signals by itself: price range, form factor, display size, model family, and more. danube.ai calls this the “reference product.”

“From this reference product we draw a sphere in the product space. A multidimensional sphere that is our filter.”

This mental model turns filtering into a geometric selection of “nearby” products in a feature space, avoiding the need for users to define dozens of constraints.

The multidimensional sphere: filtering as similarity

Concretely, products are embedded in a feature space where closeness reflects similarity. The reference product sits at the center; a sphere around it captures nearby candidates.

  • No explicit UI filters: the sphere is the operational filter.
  • The sphere’s radius is the tolerance for variation: close enough to feel relevant, not identical.
  • Inside the sphere, candidates are ranked by price–performance.

The output is an upsell set that feels familiar (“near what I searched for”), yet leaves room for better value.
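The talk stays conceptual, but the sphere-as-filter idea can be sketched in a few lines. This is a minimal illustration, not danube.ai's implementation: it assumes products are already embedded as normalized numeric feature vectors and uses plain Euclidean distance; the type and function names are hypothetical.

```typescript
// Hypothetical sketch: products as normalized feature vectors,
// the reference product at the center, a radius as the "filter".
type Product = { id: string; features: number[] };

// Euclidean distance in the (already normalized) feature space.
function distance(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
}

// The sphere IS the filter: keep every product within `radius`
// of the reference product, excluding the reference itself.
function sphereFilter(reference: Product, catalog: Product[], radius: number): Product[] {
  return catalog.filter(
    (p) => p.id !== reference.id && distance(p.features, reference.features) <= radius
  );
}
```

The radius plays the role the talk assigns to it: small enough that candidates still feel "near what I searched for," large enough to leave room for better-value alternatives.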

Price–performance as the ranking objective — personalized by context

Within the sphere, the AI evaluates candidates by price–performance. “Performance” is not a fixed checklist; it’s contextual and adapts based on the reference product and category signals. The demo surfaces this through live importance weights per property — percentage values that explain what the AI currently considers most relevant.

A concrete example: for the Samsung Galaxy M51, the AI recognizes its exceptionally long conversation time (“64 hours”), and weights that property strongly in scoring.

  • Transparent importance: percentage weights give a glimpse into the model’s decision logic.
  • Contextual scoring: a property’s importance can rise or fall depending on the reference product.
  • Upselling as better overall value: not merely more expensive, but better price–performance near the user’s intent.
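One simple way to formalize the ranking objective described above is "weighted performance per unit price." The weighted-sum form and the division by price are assumptions for illustration; the talk only states that importance weights are contextual percentages, not how they are combined.

```typescript
// Hypothetical sketch: "performance" as a weighted sum of normalized
// property values; price–performance as performance per unit price.
type Weights = Record<string, number>; // e.g. { battery: 0.4, camera: 0.3, recency: 0.3 }

function performanceScore(props: Record<string, number>, weights: Weights): number {
  return Object.entries(weights).reduce(
    (sum, [key, w]) => sum + w * (props[key] ?? 0),
    0
  );
}

// Higher is better: more performance for less money.
function pricePerformance(price: number, props: Record<string, number>, weights: Weights): number {
  return performanceScore(props, weights) / price;
}
```

Under this formulation, a cheaper phone with the same weighted performance automatically outranks a pricier one, which matches the talk's framing of upselling as better value, not just a higher price tag.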

Lightweight personalization: property boosts instead of 20 filter sliders

A standout UX element is the ability to “boost” specific properties. Boosts don’t hard‑filter; they adjust the weighting: “make this criterion more important.”

  • Example 1: Recency. In the demo, “recency” starts at 13%. One click boosts it to 47%, and the top recommendations change immediately.
  • Example 2: Rear camera megapixels. Boosting this property reshuffles the candidates again.

The power lies in minimal interaction for maximal signal. Rather than juggling numerous filters, the user steers the AI through the product space with a few deliberate nudges.
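One plausible mechanic for such a boost, sketched here as an assumption rather than the demoed algorithm: multiply the chosen property's weight and renormalize so all weights still sum to 100%, then re-rank with the new weights.

```typescript
// Hypothetical sketch: a boost raises one property's weight by `factor`
// and renormalizes so the weights still sum to 1 (i.e. 100%).
function boost(
  weights: Record<string, number>,
  key: string,
  factor: number
): Record<string, number> {
  const boosted = { ...weights, [key]: weights[key] * factor };
  const total = Object.values(boosted).reduce((a, b) => a + b, 0);
  return Object.fromEntries(
    Object.entries(boosted).map(([k, v]) => [k, v / total])
  );
}
```

Because renormalization shrinks every other weight as one grows, a single click can shift the ranking noticeably, which is consistent with the demo's immediate reshuffling after each boost.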

The demo flow: from Samsung Galaxy M51 to OnePlus Nord and Realme X3

Wissgott walks through a concrete path to make the system’s behavior tangible:

  • Reference product: Samsung Galaxy M51
  • AI output: five comparable products with similar or better price–performance, including the Samsung Galaxy A71
  • Explainability: visual cues for where a candidate is better or worse than the reference
  • Visible weighting: left‑side percentage importances, e.g., high emphasis on conversation time for the M51
  • Boost “recency”: from 13% to 47%, immediately shifting the results; OnePlus Nord appears
  • Drill‑down: click into OnePlus Nord to see its similar products
  • New top suggestion: Realme X3
  • Further refinement: boost “rear camera megapixels,” reshuffling the set again

The essence in Wissgott’s words: with very few manual inputs, users can express what matters and navigate to the right product quickly.

Architecture and tools: JavaScript/Node.js, Docker, worker framework, SDK

danube.ai built on their existing foundation:

  • SDK baseline: their in‑house technology was already packaged as an SDK.
  • Primary stack: JavaScript and Node.js.
  • Containerization: “everything dockerized.”
  • Processing and scalability: a worker framework with sensible load balancing, enabling the API to handle many concurrent requests.

From an engineering standpoint, the approach is pragmatic: the AI logic is delivered as an API atop a proven web/worker stack rather than as a monolithic special‑purpose server, which eases deployment and scale‑out.

Team setup and delivery mode: focused roles, broad participation

danube.ai’s team is around ten people, and “almost everyone” contributed to this project:

  • T‑shaped skills with emphasis: some more frontend, some more backend — “even though everyone can do everything.”
  • Ownership by interest: frontend specialists “picked their frontend jewels,” backend specialists took their backend parts.
  • Technical project lead: managed issues and handled acceptance of the dev team's work.
  • Product‑owner‑like role: Wissgott himself led feature direction and communication with Geizhals to verify alignment.

This structure explains how the team finished the Upselling API within two months: broad involvement, clear accountability, tight feedback loops.

From “works” to “blazing fast”: performance as the second act

A candid remark from Wissgott: “The first solution that satisfies the features usually doesn’t have the best performance.” After hitting functional completeness, the team focused on speed — with massive improvements. The contrast to the first version is, in his words, hardly imaginable.

What we take away as engineers:

  • Value first, velocity second: validate product behavior, then optimize hot paths.
  • Measurability and focus: even without numbers in the talk, the message implies systematic bottleneck hunting.
  • Infra matters: worker framework + load balancing + containerization are a solid base for concurrency and scale.

Geizhals was “very excited” about the result — a sign that both UX and technical performance resonated.

UX as a system property: explainability and control without complexity

A defining trait of the demo is transparency. Showing property importance percentages makes the AI’s reasoning tangible. Combined with boosts, users get immediate feedback: adjusting importance leads to instant changes in recommendations.

  • Explainability that matters: not a black box, but visible signals.
  • Immediate control: boosts translate directly into outcomes.
  • Context preserved: staying near the reference product keeps results plausible and relatable.

Conceptual blueprint (without code) to reproduce the approach

The talk doesn’t include code, but the conceptual workflow is clear:

  1. Establish the reference product
  • Use the entry product page (often via search) as the center of personalization.
  2. Define the similarity space
  • Represent products in a feature model that captures meaningful dimensions.
  • Choose a similarity measure so “near” products are genuinely comparable.
  3. Apply the multidimensional sphere as the filter
  • Set a radius (or threshold) to include close neighbors.
  • Produce a candidate set within the sphere for ranking.
  4. Implement price–performance scoring
  • Combine price with performance features in a ranking function.
  • Make property importance contextual to the reference product.
  5. Expose explainability signals
  • Display importance weights for key properties.
  • Highlight where candidates are better/worse than the reference.
  6. Add the boost mechanism
  • Allow user‑driven weight adjustments (e.g., recency, camera, battery life).
  • Recompute rankings immediately after boosts.
  7. Make it production‑ready
  • API layer in JavaScript/Node.js (as described in the session) or equivalent.
  • Containerize (Docker) for reproducibility.
  • Use a worker framework and load balancing to handle high concurrency.
  8. Iterate for performance
  • After functional validation, identify bottlenecks.
  • Apply caching, parallelization, or pathway simplifications as needed.

These steps mirror the principles described in the session without going beyond the provided information.
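As a toy end-to-end sketch of the core pipeline (sphere filter, then weighted price–performance ranking), under the same assumptions as above: features are normalized numbers, distance is Euclidean over the weighted properties, and all names are hypothetical.

```typescript
// Toy end-to-end sketch: sphere filter -> price–performance ranking.
type Item = { id: string; price: number; features: Record<string, number> };

// Euclidean distance over the properties named in `keys`.
function dist(a: Record<string, number>, b: Record<string, number>, keys: string[]): number {
  return Math.sqrt(keys.reduce((s, k) => s + (a[k] - b[k]) ** 2, 0));
}

// Filter to the similarity sphere around `reference`, then rank the
// survivors by weighted performance per unit price.
function recommend(
  reference: Item,
  catalog: Item[],
  weights: Record<string, number>,
  radius: number,
  topN = 5
): Item[] {
  const keys = Object.keys(weights);
  const score = (it: Item) =>
    keys.reduce((s, k) => s + weights[k] * it.features[k], 0) / it.price;
  return catalog
    .filter(
      (it) => it.id !== reference.id && dist(it.features, reference.features, keys) <= radius
    )
    .sort((a, b) => score(b) - score(a))
    .slice(0, topN);
}
```

A boost would simply rewrite `weights` and call `recommend` again, which is why the demo can reshuffle results instantly after each click.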

Risks, boundaries, trade‑offs — and how the approach addresses them

Without introducing new facts, some general trade‑offs emerge from the talk:

  • No user history means less long‑term context. The reference‑product approach compensates with a strong initial signal and lightweight boosts while intentionally avoiding tracking.
  • Defining a meaningful “sphere” requires solid feature modeling. danube.ai’s existing SDK foundation suggests this groundwork was already in place.
  • Explainability vs. complexity: exposing weights is a simple yet powerful way to build trust without overwhelming users.

Two months to an API: why that timeline held

Wissgott highlights a two‑month delivery window. Enablers from the session:

  • An existing SDK base
  • A focused scope (upselling within a similarity sphere, price–performance objective)
  • Broad team involvement with clear roles (technical project lead, product‑owner‑like customer alignment)

For engineering teams, this is a replicable pattern: a clear product core, a reusable platform, and tight validation cycles enable novel ideas to reach production quickly.

Quote highlights

  • “Our filter is not manual price ranges — it’s a sphere in the product similarity space.”
  • “The first solution that satisfies the features usually doesn’t have the best performance.”
  • “It’s a good feeling to know you solved a problem for the first time.”

These lines capture the project’s conceptual clarity, candid engineering mindset, and pioneering quality.

Engineering takeaways you can apply

From “How danube.ai built: "Eine Upselling-AI als API für Geizhals"” by Philipp Wissgott (danube.ai), here are the core lessons for practitioners:

  • Use the reference product as the starting signal instead of collecting user history.
  • Treat filtering as geometry: a similarity sphere yields coherent candidate sets.
  • Rank by price–performance, not price alone.
  • Increase trust with visible importance weights and side‑by‑side property deltas.
  • Favor a few strong user interactions (boosts) over many filter toggles.
  • Ship the product first, then invest in performance optimization.
  • Build on practical infra: SDK reuse, JavaScript/Node.js, Docker, worker framework, load balancing.
  • Organize for speed: broad team engagement, clear responsibilities, and close customer validation.

Conclusion

“How danube.ai built: "Eine Upselling-AI als API für Geizhals"” with Philipp Wissgott presents a crisp alternative to filter‑heavy catalog UX: a similarity‑space mindset where an AI ranks for price–performance, and a few boosts provide meaningful, low‑effort personalization. Backed by a pragmatic JS/Node stack, containerization, and a worker framework, danube.ai shipped the API in two months and subsequently pushed performance significantly.

For engineers, the message is clear: better recommendations come from better signals, not more knobs. Making the reference product the central signal and exposing the AI’s weighting makes upselling not just smarter, but more understandable — exactly the kind of product intelligence that resonates in e‑commerce.
