KI in der Software Entwicklung

Description

In his devjobs.at TechTalk, Dominik Amon of WKO Inhouse demonstrates in a live coding session how AI can be used in everyday development work and examines the advantages and disadvantages.

Video Summary

In KI in der Software Entwicklung, Dominik Amon (WKO Inhouse) offers a pragmatic view of AI’s impact on software engineering, including a live demo in Microsoft Visual Studio 2022 where GitHub Copilot generates a working decode/decompress method with minimal effort, illustrating gains in understanding, onboarding, and debugging. He contrasts bold replacement claims with real-world outcomes and warns about vibe-coding, hallucinations, and average-quality code from generic prompts, as well as security and accountability risks. His takeaways: use AI for repetitive tasks, enforce automated static code analysis (tools such as SonarQube) and a strict four-eyes review for AI-generated changes, and set realistic project expectations; he values Copilot at about 20 euros per developer per month.

AI in Software Development: A grounded take from Dominik Amon (WKO Inhouse) on Copilot, risk, and responsibility

Setting the stage: Hype, dystopia, and everyday code

In “KI in der Software Entwicklung” by Dominik Amon (WKO Inhouse), an experienced full‑stack developer offers a pragmatic lens on how AI is reshaping software work. At DevJobs.at, we listened closely to what actually changes in daily engineering, what remains firmly in human hands, and how teams should respond.

Amon begins with the louder voices that now echo across engineering orgs:

  • Claims that by 2025 “many mid‑level engineers” could be replaced.
  • A CEO aspiring to create technology so that we “no longer need to program.”
  • The bold stance of hiring “no developers in 2025” because AI will drive such efficiency.
  • A popular risk website that rates “Computer Programmers” at 67% High‑Risk.

The big question: Is programming truly at risk of being automated away—or are we witnessing a productivity wave that requires new disciplines? Rather than speculate, Amon reaches for a live demo.

The demo: GitHub Copilot in Visual Studio 2022—flipping logic the other way

Amon opens Microsoft Visual Studio 2022 with GitHub Copilot enabled. The starting point is intentionally modest: a method that encodes and compresses a string and returns the output. The task: reverse it—decompress and decode back to a human‑readable string.
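
The session doesn’t linger on the exact source, but the starting point plausibly looks like the following minimal sketch, assuming UTF-8, GZip, and Base64 (the method name EncodeAndCompress is ours, not from the demo):

    using System;
    using System.IO;
    using System.IO.Compression;
    using System.Text;

    static string EncodeAndCompress(string input)
    {
        // String -> UTF-8 bytes -> GZip compress -> Base64 string
        byte[] raw = Encoding.UTF8.GetBytes(input);
        using var buffer = new MemoryStream();
        using (var gzip = new GZipStream(buffer, CompressionMode.Compress))
        {
            gzip.Write(raw, 0, raw.Length);
        }
        // ToArray still works after GZipStream has closed the MemoryStream.
        return Convert.ToBase64String(buffer.ToArray());
    }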

The workflow is standard for modern AI‑assisted coding:

  • Inline suggestions appear as he types; Copilot recognizes the pattern and proposes matching snippets.
  • He also queries Copilot directly (“create a decode method”), referencing the existing code.
  • Copilot responds with a concise implementation that inverts the steps: Base64 back to bytes, decompress, return as string.
  • Amon inserts the method, adds the call, and runs it. Result: the decoded string appears correctly.

The point is deliberately unsensational: AI can help “think in reverse,” remove boilerplate, and nudge you toward a correct implementation with just a few clicks. He still validates the output and ensures the suggestion fits the surrounding code.
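
Following the steps described above, the suggested inverse might look like this sketch (again hedged: same hypothetical naming, GZip assumed as the compression):

    static string DecodeAndDecompress(string encoded)
    {
        // Base64 string -> bytes -> GZip decompress -> UTF-8 string
        byte[] compressed = Convert.FromBase64String(encoded);
        using var input = new MemoryStream(compressed);
        using var gzip = new GZipStream(input, CompressionMode.Decompress);
        using var reader = new StreamReader(gzip, Encoding.UTF8);
        return reader.ReadToEnd();
    }

The structure mirrors the forward method step by step, which is precisely the “thinking in reverse” the demo illustrates.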

From expectation spikes to sober reality

Amon frames the broader debate with a long view: for years, the perceived likelihood of software jobs being automated hovered around 50%. In 2022/2023 it spiked to 73%, then declined again. Why?

  • A public‑sector pilot: Australia’s finance ministry tested AI for 14 weeks. The “high expectations” were not met; for complex tasks, AI “did not really help.” Usage started high and then dropped sharply.
  • A take from the Microsoft sphere: AI is “not a game‑changer,” but “very helpful.” That aligns with what we observe in many teams.
  • Another reported effect: AI can hinder critical thinking. Amon calls out “vibe‑coding”—blindly adopting suggestions that sound plausible.
  • Meanwhile, a Microsoft voice responsible for AI productivity contradicts the message that one should stop studying IT: “We will still need programmers,” only “far more efficient.”

For us at DevJobs.at, these points fit together: AI accelerates, but it doesn’t auto‑produce quality. It lightens the load, but doesn’t replace engineering judgment. And it raises the stakes for process, review, and responsibility when code reaches production faster.

Where AI actually helps in software work today

Amon distills practical advantages from his daily experience. We saw them in the demo and his examples:

  • Explaining code: “What does this do?”—AI clarifies unfamiliar or older code, including gnarly bits.
  • Demystifying regular expressions: It’s consistently helpful to ask, “What exactly is this regex doing?” (see the example after this list).
  • Faster onboarding to new environments: If you’re new to, say, smartphone apps, AI support shortens the ramp‑up.
  • Conversational problem‑solving: Instead of stewing on your own, use AI as a dialog partner about solution paths.
  • Debugging support: Amon describes being stuck for two hours and then getting a hint to the “right spot” from AI. He still had to implement the fix—but the detour ended.
  • Efficiency by cutting repetitive typing: A large chunk of coding is repetitive. AI removes “monkey work,” letting people focus on solution design.
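
As an illustration of the regex point (ours, not from the talk): this is the kind of pattern worth pasting into Copilot Chat with “What exactly is this regex doing?”, with the answer it should roughly produce shown as comments:

    using System;
    using System.Text.RegularExpressions;

    // ^         start of string
    // (?=.*\d)  lookahead: at least one digit somewhere
    // \w{8,}    eight or more word characters
    // $         end of string
    // In plain words: a word of length >= 8 that contains at least one digit.
    var pattern = new Regex(@"^(?=.*\d)\w{8,}$");

    Console.WriteLine(pattern.IsMatch("passw0rd")); // True
    Console.WriteLine(pattern.IsMatch("password")); // False: no digit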

The common denominator: AI amplifies effectiveness where context and engineering judgment already exist. It does not relieve us of responsibility; it shifts focus toward evaluation, architecture, and safeguards.

Risks and pitfalls: Why “vibe‑coding” can turn expensive

Amon is candid about the failure modes. Four stand out.

1) Vibe‑coding: Plausible does not mean correct

When suggestions “sound good,” the temptation is to accept them without architectural or domain validation. That can work in a demo, but it’s risky in business apps. Wrong assumptions creep in quietly, reviews get lax, technical debt compounds, and the “why” behind decisions fades.

2) Hallucinations: Methods that don’t exist

Amon shares a concrete example. While building a smartwatch app, AI proposed a method to make the watch vibrate. The name sounded right—but the method didn’t exist. Reading the documentation finally cleared it up. AI can be convincingly wrong. If you don’t catch that, you waste time—or ship defects.

3) Median code from training data

If models are trained on GitHub code, there’s “a lot of good code, but also mediocre or poor code.” Unless you steer precisely, the output tends toward the median. If you want high‑quality code, you must make quality criteria explicit rather than drifting with generic output.

4) Vague prompts produce functional but unsafe solutions

“Works” is not enough. Business systems require explicit assurances: security, performance, reusability. If prompts are too generic, these attributes are often omitted. The code runs—but may be easy to hack, scale poorly, or embed unmaintainable patterns.
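
A generic illustration of this failure mode (ours, not from the talk): both variants below “work,” but only the second survives a security review. It assumes Microsoft.Data.SqlClient, an open SqlConnection conn, and a user-supplied string name:

    // Typical output of a vague prompt: functional, but open to SQL injection.
    var unsafeCmd = new SqlCommand(
        "SELECT * FROM Users WHERE Name = '" + name + "'", conn);

    // What an explicit prompt ("use parameterized queries") or a static
    // analyzer should enforce: the same query, parameterized.
    var safeCmd = new SqlCommand(
        "SELECT * FROM Users WHERE Name = @name", conn);
    safeCmd.Parameters.AddWithValue("@name", name);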

Accountability: Who owns an AI‑induced incident?

Amon raises the tough question: If a security issue hits production, who is responsible?

  • The AI that suggested the code?
  • The developer who accepted it?
  • The review process that failed to catch it?

With AI‑assisted code, precious time may be lost just to understand the reasoning behind a change. That heightens the need for clear processes, auditability, and quality gates—before anything ships.

Practical guardrails: Making AI safe and useful for teams

From Amon’s guidance, we draw operational practices that teams can adopt immediately.

1) Use AI where it reliably removes repetitive work

Target repetitive, pattern‑heavy tasks: standard conversions, boilerplate, recurring code structures. That’s where AI provides immediate time savings without overriding architectural choices.

2) Enforce automated checks

“AI‑generated code should definitely be automatically checked.” Concretely: static code analysis tools such as SonarQube for quality and security. Shift control left: catch issues earlier and lighten the load on human reviews.

3) Keep the two‑person rule non‑negotiable

Best practice already says: a second person must approve code. For AI‑generated changes this is even more important. If AI opens a pull request, a human must verify it aligns with company standards. No merge without review.

4) Mentor newcomers and call out vibe‑coding

Entry‑level developers need intentional guidance: When does AI truly help? When are documentation, code comprehension, and architectural thinking more important than a quick suggestion? Amon warns that blind trust can lead to costly defects—including security issues.

5) Manage expectations at the project level

“AI helps our efficiency, but it doesn’t solve every problem.” This message must reach project management. If you prematurely cut hiring or bank on “magic,” you risk producing large volumes of unusable or unmaintainable code—only to rewrite it later at high cost.

6) Evaluate ROI realistically

Amon names a concrete price point: GitHub Copilot at roughly 20 euros per developer per month. His verdict: “fair” for what it currently delivers. He wouldn’t pay more right now. That’s a clear directive: measure value, don’t assume it.

Technical takeaways from the demo: Patterns, context, and tests

A small demo—big lessons for everyday engineering:

  • Inversion patterns come quickly with AI: encoding/compression vs. decoding/decompression. If you name the right counterparts (e.g., Base64 decode), suggestions get useful fast.
  • Tests remain non‑negotiable: Amon adds the call, checks the output—only then is the suggestion “good.” Without tests, a plausible mistake sails through (see the round‑trip sketch after this list).
  • Feed context deliberately: The quality of Copilot’s answer visibly depended on the surrounding code. The reference provided the skeleton. Good prompting context is part of the engineering effort.
  • Human judgment is the final gate: Amon evaluates the suggestion, integrates what fits, discards the rest. This “gatekeeping” is what makes AI helpful instead of hazardous.
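
A minimal round-trip check in the spirit of the demo, reusing the hypothetical EncodeAndCompress/DecodeAndDecompress sketches from above (a plain entry point instead of a test framework keeps it self-contained):

    static void Main()
    {
        string original = "Hello from the Copilot demo";
        string roundTripped = DecodeAndDecompress(EncodeAndCompress(original));

        // Only an exact round trip counts as a "good" suggestion.
        Console.WriteLine(roundTripped == original
            ? "Round trip OK"
            : "Round trip FAILED");
    }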

A compact checklist to operationalize Amon’s guidance

  • Aim AI at repeatable, standardized tasks first.
  • Make quality attributes explicit in prompts when relevant: security, performance, reusability.
  • Integrate static code analysis tools (e.g., SonarQube) into CI for automated checks.
  • Enforce the four‑eyes principle for all PRs—including AI‑generated ones.
  • Invest in knowledge building: pair juniors with experienced colleagues, discuss “vibe‑coding.”
  • Set expectations with management: AI boosts efficiency but doesn’t replace developers.
  • Review costs vs. value regularly: assess tools like Copilot in retrospectives.

What we learned from “KI in der Software Entwicklung”

Dominik Amon (WKO Inhouse) rejects both extremes. He shows how AI changes software work without devaluing the profession.

AI is not a game‑changer, but it is very helpful.

That framing fits the entire session: If you treat AI as an accelerator, you win. If you treat it as a substitute for thinking, architecture, and responsibility, you lose—time, quality, and trust.

Amon encourages competent use: ask AI to explain code, lower onboarding barriers, speed up debugging, and automate repetitive typing. And he doubles down on what remains essential: quality gates, security thinking, peer reviews—and the ability to distinguish plausible from correct.

Closing: Yes to efficiency, no to outsourcing responsibility

“AI is here to stay.” This thread runs through Amon’s talk. The path forward is clear: define where AI sensibly relieves workloads; backstop the output with static analysis tools and reviews; invest in awareness, especially for newcomers. Keep project expectations realistic. And align budgets with demonstrated value.

Work this way and you combine the best of both worlds: human judgment and machine acceleration. That’s the realistic, productive core of “KI in der Software Entwicklung,” made tangible in a simple Visual Studio demo—and in the guardrails Amon urges teams to adopt.
