danube.ai
Reinventing AI
Description
In his devjobs.at TechTalk, Philipp Wissgott of danube.ai explores what we actually expect from search engines and shows one way such results could be determined.
Video Summary
In Reinventing AI, Philipp Wissgott (danube.ai) argues that “decision engines” are the post-search paradigm, noting that Google largely filters information with coarse personalization and forces users to become domain experts. He details danube.ai’s approach: a non–machine-learning, multidimensional sorting algorithm that mirrors human decision steps (pros/cons, weighting, ranking), builds a per-request, per-user AI on the fly, and personalizes from the first click via user-behavior similarity rather than content similarity; users can keep data local. For practitioners, the takeaway is how to design choice workflows that plug into existing data, yield immediately actionable, highly personalized results (e.g., scheduling test drives), and avoid big-data training phases.
Reinventing AI: From Search to Decision Engines – Technical Insights from Philipp Wissgott at danube.ai
Why “Reinventing AI” matters now
In “Reinventing AI,” Philipp Wissgott (danube.ai) argues that the next leap after two decades of search technology is not “better search,” but a different paradigm: decision rather than discovery. His framing is straightforward: search engines are powerful filters across immense datasets, yet they stop short of helping us decide. danube.ai positions itself exactly in that gap with a decision engine built on a multi‑dimensional sorting algorithm—explicitly not machine learning.
From our DevJobs.at vantage point, the talk doubles as an architectural critique and a blueprint. It asks engineers to map what humans actually do when deciding—collect memories, filter, list pros and cons, weight them, and rank options—and to implement the parts that software can perform without replicating the entire web.
What made Google win—and where filters fall short
Wissgott starts with a brief history. Before Google, Yahoo and AltaVista dominated. Google’s inflection point came from a hierarchical algorithm that made the web tractable at scale. That victory was mostly about data management, indexing, and efficient ranking—less about “intelligence” in the human sense.
The strengths of classic search:
- It improves with usage: “Google gets better with every search.”
- It personalizes, albeit coarsely—using your location, for instance, to surface nearby dealerships when you search for a car.
- It packs countless microservices that speed up everyday tasks (Maps, Calendar, Search), compounding small time savings across billions of users.
The limits:
- The search field and results are essentially a filter to handle an enormous corpus.
- Page one is everything; few of us look beyond it.
- For consumer questions, engines nudge users to become mini‑experts (TVs, laptops, cars) to make sense of results, which doesn’t scale in time or attention.
Bottom line: there’s a wide gap between “find information” and “make a good decision.” That is the opening for decision engines.
The decision engine vision in practice
Wissgott grounds the concept in a practical scenario. Not “five tips to buy a car,” but: “Which car should I buy?” The answer should be a handful of well‑matched, actionable options.
- Personalization leverages your preferences: lifestyle, typical vacations, favored brands, even color choices.
- Output is compact: “five results” you can test drive “in one hour” at a shop around the corner.
- Semi‑automation: the engine filters and recommends; the human makes the final call.
Data privacy is part of the design. As Wissgott puts it, users should choose where their data lives:
- Keep everything local: stored in a cookie or your browser’s local storage.
- Or create an account—optional, not required.
The pairing of immediate personalization with user‑controlled data is a deliberate engineering constraint—and a strong differentiator.
How humans decide: a software blueprint
To automate decision‑making, danube.ai mirrors a simple model of human cognition:
1) Collect all memories.
2) Filter irrelevant material.
3) Build a pro and con list.
4) Weight pros and cons.
5) Sort options by personal costs and personal value.
One key detail: danube.ai currently focuses on steps 3–5. Steps 1–2—collecting and filtering vast inputs—are what Google mastered at planetary scale over the past two decades. In today’s customer deployments, the data typically comes from the customer (pre‑processed), and danube.ai performs the pro/con derivation, weighting, and ranking.
For engineering teams, the takeaway is liberating: you can deliver decision support without owning the full upstream data acquisition problem. A curated input set suffices if steps 3–5 are robust and personal.
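Steps 3–5 can be caricatured in a few lines of code. The following is a minimal sketch, not danube.ai’s actual algorithm: the `Candidate` shape, the signal names, and the pros-minus-cons scoring rule are all illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    pros: dict[str, float]  # signal -> strength, step 3 of the human model
    cons: dict[str, float]


def score(candidate: Candidate, weights: dict[str, float]) -> float:
    """Step 4: weight pros and cons by personal relevance."""
    gain = sum(weights.get(k, 0.0) * v for k, v in candidate.pros.items())
    cost = sum(weights.get(k, 0.0) * v for k, v in candidate.cons.items())
    return gain - cost


def shortlist(candidates: list[Candidate], weights: dict[str, float], k: int = 5):
    """Step 5: sort by personal value and return a compact, actionable shortlist."""
    return sorted(candidates, key=lambda c: score(c, weights), reverse=True)[:k]


cars = [
    Candidate("compact_ev", pros={"range": 0.6, "price": 0.9}, cons={"space": 0.4}),
    Candidate("family_suv", pros={"space": 0.9}, cons={"price": 0.7}),
]
prefs = {"price": 1.0, "space": 0.3, "range": 0.5}  # hypothetical personal weights
print([c.name for c in shortlist(cars, prefs)])  # → ['compact_ev', 'family_suv']
```

The point of the sketch is the division of labor: candidates arrive pre-filtered (steps 1–2 done upstream), and everything personal happens in the weighting and the sort.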
Not machine learning: a different philosophy and lifecycle
Wissgott is explicit about the divergence from ML:
“Danube is a multi‑dimensional sorting algorithm that has nothing to do with machine learning.”
What follows from that?
- No centralized trained model. Classical ML aggregates data on a central server, trains models, and serves predictions from that learned state.
- Personalization on demand:
“The artificial intelligence is actually going from the server to you … and is created at the very moment when you press the button.”
- No fixed training window. ML often splits training and live phases; here, the “AI” is spun up per request from your personal profile.
This reassigns complexity from “offline training + global inference” to “online instantiation + individual weighting.” It has implications for latency, scaling, and data flow: you trade global stability for per‑interaction adaptability.
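The “online instantiation” idea can be illustrated with a closure: instead of serving predictions from a centrally trained model, a throwaway scorer is constructed from the user’s profile at the moment of the request. The profile shape and attribute names below are invented for illustration.

```python
def make_personal_ai(profile: dict[str, float]):
    """Build a per-request scoring function from one user's profile --
    'created at the very moment when you press the button'."""
    def score(item: dict[str, float]) -> float:
        # weighted dot product of item attributes and personal weights
        return sum(profile.get(attr, 0.0) * v for attr, v in item.items())
    return score


# A fresh "AI" per request; nothing is trained or stored centrally.
ai = make_personal_ai({"price": 1.0, "range": 0.4})
print(ai({"price": 0.8, "range": 0.5}))  # → 1.0
```

Nothing survives the request unless the user opts in, which is what makes the privacy stance (local storage, optional account) compatible with deep personalization.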
Behavior over content: the Netflix illustration
Wissgott contrasts common content‑similarity recommendations with how people actually watch. If you watch a cooking show, most engines propose more cooking shows. But your next choice might be the new action movie everyone talks about—something content metrics might not capture.
The danube.ai route:
- Look only at user behavior and user similarities.
- Build a personal profile from many small similarity edges (“I’m 5% similar to person X, 5% to person Y …”).
- Create “the personal artificial intelligence just for you” from that profile to steer the next suggestions.
Crucially, this approach does not depend on Big Data:
“We don’t need big data … it works after the first click, and personalization gets better exponentially with every click.”
For practitioners, the shift is clear: recommendations can start with minimal interaction data and home in fast without relying on content features.
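One simple way to realize behavior-over-content matching is collaborative filtering on click sets: compute many small similarity edges to other users and score items by who chose them, ignoring content features entirely. This is a hedged sketch of the general technique, not danube.ai’s implementation; Jaccard similarity and the toy user data are my assumptions.

```python
def jaccard(a: set, b: set) -> float:
    """Overlap-based similarity between two users' click sets."""
    return len(a & b) / len(a | b) if a | b else 0.0


def similarity_edges(me: set, others: dict[str, set]) -> dict[str, float]:
    """Many small edges: 'I'm x% similar to person Y' -- from behavior alone."""
    return {uid: jaccard(me, clicks) for uid, clicks in others.items()}


def recommend(me: set, others: dict[str, set], k: int = 3) -> list[str]:
    """Score unseen items by the similarity of the users who chose them."""
    scores: dict[str, float] = {}
    for uid, weight in similarity_edges(me, others).items():
        if weight == 0.0:
            continue  # no behavioral overlap, no influence
        for item in others[uid] - me:
            scores[item] = scores.get(item, 0.0) + weight
    return sorted(scores, key=scores.get, reverse=True)[:k]


others = {
    "x": {"cooking_show", "action_movie"},
    "y": {"cooking_show", "documentary"},
    "z": {"quiz_show"},
}
# Works after a single click: one watched show already yields suggestions,
# and the action movie surfaces even though it shares no content features.
print(recommend({"cooking_show"}, others))
```

Note how nothing here inspects what a show is about; the action movie is recommended purely because behaviorally similar users watched it.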
Multi‑dimensional sorting: what the talk tells us
While the talk doesn’t unpack formulas, the functional properties of the algorithm are discernible:
- Multiple dimensions of evaluation: costs, personal value, and weighted pros/cons define axes of a scoring space.
- Weighted aggregation: pros and cons don’t carry equal weight; they’re scaled by personal relevance.
- Sorting over classification: the goal is a ranked shortlist of actionable options, not a static category assignment.
- Runtime instantiation: on every click, the decision matrix is personalized and applied to the current candidate set.
This design is especially suited when the “last mile” action is well defined (e.g., “book test drive”) and the list must remain small and highly relevant.
Data control and privacy as architectural choices
Personalization need not compromise sovereignty. The proposed model lets users keep data locally (cookie/local storage) or opt into an account. From an engineering perspective, this encourages short‑lived, edge‑instantiated profiles and avoids persistent, centralized stores by default—yet still enables deep personalization per session.
Building blocks for steps 3–5
Even without API diagrams, the operational layers are implicit. Teams aiming to build analogous engines can think in terms of:
1) Candidate intake (upstream): data provided and pre‑filtered by a partner or customer system (human model steps 1–2).
2) Pro/con derivation: transform user behavior and context into benefit/risk signals.
3) Weighting: scale by personal costs and personal value functions.
4) Multi‑dimensional sorting: rank candidates into a concise shortlist.
5) Action handoff: move to the next concrete step (e.g., schedule test drive), preserving semi‑automation.
Per the talk, danube.ai currently concentrates on 2)–4).
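The five layers can be strung together in one compact pipeline. Everything below is an illustrative sketch under stated assumptions: the candidate fields (`available`, `value`, `cost`, `kind`), the affinity profile, and the test-drive handoff string are hypothetical.

```python
def decide(raw_candidates: list[dict], profile: dict[str, float], k: int = 5) -> list[str]:
    """End-to-end sketch of the five layers described above."""
    # 1) Candidate intake: assume upstream pre-filtering; keep available options.
    candidates = [c for c in raw_candidates if c.get("available", True)]

    # 2) Pro/con derivation: turn raw attributes into a benefit/risk signal.
    def signal(c: dict) -> float:
        return c["value"] - c["cost"]

    # 3) Weighting: scale the signal by the user's affinity for this kind of option.
    def personal(c: dict) -> float:
        return signal(c) * profile.get(c["kind"], 1.0)

    # 4) Multi-dimensional sorting into a concise shortlist.
    ranked = sorted(candidates, key=personal, reverse=True)[:k]

    # 5) Action handoff: semi-automation -- the user makes the final call.
    return [f"offer test drive: {c['name']}" for c in ranked]


offers = decide(
    [
        {"name": "city_car", "kind": "ev", "value": 8, "cost": 3},
        {"name": "van", "kind": "combustion", "value": 7, "cost": 5, "available": False},
    ],
    profile={"ev": 1.2},
)
print(offers)  # → ['offer test drive: city_car']
```

Layers 2–4 are the part the talk says danube.ai owns; layers 1 and 5 are deliberately thin adapters to the customer’s systems.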
Semi‑automation: reducing complexity without taking control
A notable design stance is the choice of semi‑automation. The engine narrows choices and raises relevance while leaving final authority with the user. This shapes:
- Responsibility: the human remains the decision maker.
- Trust and UX: small, high‑quality shortlists reduce cognitive load without forcing outcomes.
- Error handling: semi‑automation is more resilient to individual misjudgments than full automation.
For personal consumer decisions, this balance feels appropriate and pragmatic.
Personalization from the first click
The claim that it “works after the first click” rests on the use of behavioral similarities rather than content features. Minimal interaction data can seed a meaningful profile that improves rapidly. In practice, that means:
- Capture the smallest viable interaction signals (the talk centers on selections/clicks).
- Continuously refresh a personal profile built from many tiny similarity links.
- Apply that profile as a weighting mechanism in the sorter.
The “exponential” improvement highlights that each choice feeds back immediately into the next—without waiting for global model retraining.
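That per-click feedback loop might look like the following minimal sketch: each click immediately nudges the personal weights, and the very next ranking already reflects it, with no retraining step. The attribute vectors and the update rule (a simple additive nudge with an assumed learning rate) are illustrative.

```python
class ClickProfile:
    """Per-user profile that reweights the next ranking after every click."""

    def __init__(self) -> None:
        self.weights: dict[str, float] = {}

    def register_click(self, item: dict[str, float], lr: float = 0.5) -> None:
        # Nudge weights toward the attributes of what the user just chose.
        for attr, v in item.items():
            self.weights[attr] = self.weights.get(attr, 0.0) + lr * v

    def rank(self, items: list[dict[str, float]]) -> list[dict[str, float]]:
        def score(it: dict[str, float]) -> float:
            return sum(self.weights.get(a, 0.0) * v for a, v in it.items())
        return sorted(items, key=score, reverse=True)


p = ClickProfile()
p.register_click({"action": 1.0})  # the first click already personalizes
ranked = p.rank([{"cooking": 1.0}, {"action": 0.9}])
print(ranked[0])  # the action item now ranks first
```

The design choice to fold feedback in synchronously is what removes the training/live split: there is no moment at which the profile is “done” and deployed.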
Constraints and open work: steps 1–2 and product availability
Wissgott is candid about current scope. The consumer‑facing variant is “a little bit future.” Today, deployments run behind the scenes for customers. Also, steps 1–2 (collection and relevance filtering) are acknowledged as substantial and not in scope to re‑implement wholesale. The focus is deliberately on delivering value where steps 3–5 enable decisions.
For engineering roadmaps, that clarity matters: meaningful decision support can be shipped without owning the entire data universe.
Engineering takeaways from “Reinventing AI”
For teams building decision support systems, the talk yields concrete guardrails:
- Start from the human decision workflow: pro/con, weighting, personalization.
- Decouple data acquisition from decision logic: it’s viable to leverage upstream systems for steps 1–2 and concentrate on 3–5.
- Think sorting, not just retrieval: aim for small, actionable shortlists.
- Prefer behavioral similarity over content features to unlock personalization immediately.
- Instantiate per interaction: move intelligence “from the server to the user,” aligning privacy with personalization.
- Choose semi‑automation intentionally: reduce choice overload, keep agency with the user.
These aren’t presented as a checklist in the talk, but they follow directly from the technical narrative.
Quotes and lines that encapsulate the approach
A few statements from Wissgott that distill the architecture:
“Google is a filter … it helps you find, but it doesn’t help you decide.”
“Danube is a multi‑dimensional sorting algorithm … nothing to do with machine learning.”
“The artificial intelligence is created at the very moment when you press the button—and it’s only for you.”
“We don’t need big data … it works after the first click … personalization gets better exponentially.”
They mark a shift from the ML‑first mindset toward a weighting‑and‑sorting engine tuned to human decision‑making.
Outlook: decision over search as the next step
The thesis is crisp: after the search era comes the decision era. That doesn’t diminish the achievement of web‑scale indexing and ranking; it reframes the value proposition around “What fits me right now?” rather than “What exists out there?”
The car example—with five options ready to test drive nearby and appointments booked—forces technical, UX, and data decisions to align to action. It also pairs privacy choice with personalization by design.
Closing: what we learned from “Reinventing AI”
Our DevJobs.at conclusion from Philipp Wissgott’s (danube.ai) session: to take decision support seriously, consider moving away from heavy, pre‑trained global models toward per‑interaction, multi‑dimensional sorters that encode personal weights and pro/cons. That shift decouples personalization from centralized Big Data and aligns with how humans operate: gather context, weigh trade‑offs, rank, and decide.
The architecture is intentionally sober: no training cycles, no content features, no monolithic model. Instead: behavior, similarities, weights—and a shortlist that invites action. That is why “Reinventing AI” sticks: less new magic, more faithful translation of human decision mechanics into software.