hilarion 5
Wird KI meinen Job ersetzen?
Description
In his devjobs.at TechTalk, Daniel Kreiseder of hilarion 5 takes a close look at the current topic of artificial intelligence from a software development perspective.
Video Summary
In "Wird KI meinen Job ersetzen?", Daniel Kreiseder (hilarion 5) examines how generative AI is reshaping software work. He covers ChatGPT and GPT‑3 vs. GPT‑4, explosive adoption, strong text use cases (translation, drafting, summarization), and concrete limits such as hallucinations (a fabricated article "translation" produced without fetching the URL), as well as image generators, deepfakes, and the Big Tech race (Microsoft with Bing/OpenAI, Google I/O, Amazon). For developers he demos tools such as GitHub Copilot (context-aware completions, upcoming chat and unit tests), TabNine, CodeWhisperer, and the Find AI search, contrasting where each succeeds or fails: GPT‑3 gave a wrong shell command while GPT‑4 did better, and ChatGPT produced invalid CloudFormation YAML while Copilot got it right. He concludes that AI won't replace programmers but will augment them. Viewers can apply this immediately by selecting the right tool for each task, verifying outputs, and leveraging AI to speed up coding, communication, and research.
AI in the Developer’s Daily Work: What We Learned from “Wird KI meinen Job ersetzen?” by Daniel Kreiseder (hilarion 5)
The framing question
Daniel Kreiseder from hilarion 5 opened with a straightforward, pressing question: Will AI replace my job—or make it better? His session steered clear of sci‑fi and focused on real workflows: which models and tools actually help today, where they fail, how Big Tech sets the tempo, and what it all means for engineering teams. We at DevJobs.at followed closely with an engineer’s lens.
The vibe was intentionally pragmatic. Since November 30, 2022—the public launch of ChatGPT—generative AI has become part of the everyday toolset and dinner‑table conversation. We’re in a gold‑rush moment, but the work still needs to ship.
What ChatGPT is—and what it isn’t
Kreiseder demystified the basics: GPT stands for “Generative Pre‑Trained Transformer,” a language model trained on text patterns to generate human‑like responses. It does not “understand” the world. That caveat was repeated often: these models are pattern machines, not reasoning entities with consciousness.
Summarized from the session: ChatGPT is a chatbot trained on patterns in text. It generates human‑like answers but doesn’t truly understand the world.
On GPT‑3 vs. GPT‑4, he let the system’s own responses speak: GPT‑4 likely trained on more and more relevant data, and it still hedges with qualifiers like “probably” and “maybe.” Most importantly, it explicitly says it has no real‑world understanding. For practitioners, the implication is clear: quality is up with GPT‑4, but the fundamental limitations remain.
Adoption at breakneck speed—1 million users in 5 days
Kreiseder highlighted a metric engineers appreciate: time from zero to one million users. While major platforms like Netflix, Twitter, Spotify, or Instagram took months or years, ChatGPT hit one million in five days—despite a registration hurdle. That velocity explains why everyone is talking about AI: perceived value is immediate, and the barrier to experimenting is low.
Where it shines today
Using AI daily, Kreiseder mapped the sweet spots that already deliver value:
- Text generation and tone control: from formal to colloquial writing, poetry to articles—“sounds good” is a genuine strength.
- Seamless language switching: mixing German and English in one thread works reliably.
- Marketing communication: Instagram posts, mailings, and polished emails. Anecdotally, the tone can be so professional that readers can tell it’s AI.
- Research‑like tasks: where Google used to be the default, ChatGPT often becomes the first stop for orientation or inspiration.
- Summarization and expansion: turning bullet points into clean prose—or distilling lengthy emails into concise bullets.
The net is pragmatic: for text and structure, models deliver “good enough” or better in many contexts—provided humans review and validate.
Hallucinations in practice—the DHH translation
Kreiseder demonstrated why blind trust fails. He asked ChatGPT to translate a David Heinemeier Hansson article (“Programming Types and Mindsets”) by pasting only the URL. The model recognized the author and title but couldn’t fetch the page. It then produced a plausible‑sounding yet incorrect text. The fix was trivial: copy the page content and paste it into the chat—translation then worked as expected.
Lesson learned: don’t assume magical web access or up‑to‑date training. These are generators, not browsers. Validation remains the user’s job.
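The workaround generalizes: instead of handing the model a URL, fetch the page yourself and inline its text into the prompt. A minimal sketch of that habit, assuming a chat-style API; the function name, marker strings, and prompt wording are our illustration, not from the talk:

```javascript
// Build a translation prompt that inlines the article text instead of a URL.
// Chat models of this era cannot fetch web pages, so the text must travel
// inside the prompt; a bare URL invites a plausible-sounding hallucination.
function buildTranslationPrompt(articleText, targetLanguage) {
  return [
    `Translate the following article into ${targetLanguage}.`,
    "Translate only the text between the markers; do not invent content.",
    "--- ARTICLE START ---",
    articleText,
    "--- ARTICLE END ---",
  ].join("\n");
}

// Usage: paste the page content you copied yourself.
const prompt = buildTranslationPrompt(
  "Programming is as much about mindset as it is about types.",
  "German"
);
console.log(prompt);
```

The markers are a cheap guard: they make it obvious to both the model and a reviewing human exactly which text was supplied, so a fabricated "translation" of unfetched content is easier to spot.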
Beyond text: audio, video, images, code, research
Kreiseder toured the wider tool landscape—while noting it changes weekly. He called out well‑known image generators like Stable Diffusion, DALL·E, and Midjourney. One thought experiment stuck: if a “professional food photo” of lasagna can be generated convincingly, how often do we still need a photoshoot for such assets?
Hands‑on testing was his method. He tried services that generate professional headshots from uploaded selfies. Observations:
- Quality improved noticeably over just half a year.
- Models tend to make you younger and “nicer,” adding suits, ties, or polos as requested.
- Telltale artifacts persist: hands and fingers are often off—still a common giveaway of generated images.
Even with odd outliers (e.g., a knight’s armor), his verdict was pragmatic: impressively usable results for many day‑to‑day purposes.
Big Tech’s full‑court press: Microsoft, Google, Amazon
Where there’s a gold rush, capital follows. Microsoft invested heavily in OpenAI, integrated the tech into Azure (OpenAI Services), and blended chat with search in Bing. A visible side effect: Linux users suddenly installing a Microsoft browser—a telling sign of how compelling these features felt.
Google answered at I/O with a strong “AI” emphasis and introduced Bard as its counterpart. Kreiseder raised a business‑model question: Google sells ads—so when and how will sponsored answers appear in such chats?
Amazon is moving too: recent announcements suggest AI capability being woven into the store experience—another signal that all major platforms are converging on generative assistance.
Risks, misuse, and control
Kreiseder didn’t gloss over the downsides. Deepfake‑quality photos of the Pope in a stylish white jacket, Trump being arrested, a Merkel‑Obama shot with flawed fingers—images are now stunningly good yet can betray themselves via small artifacts. With a few text prompts, even casual users can conjure spectacular but fake scenes.
He also illustrated how safety rules can be sidestepped: ask for illegal pirate sites and the system refuses; reframe the request as a “do not visit” list, and concrete domains appear. For engineering teams, that’s a reminder: safety policies matter—and cleverly phrased prompts can expose enforcement gaps.
Relatedly, a rising class of browser extensions automates chat‑based workflows. One example: a plugin that navigates a provider’s support chat to cancel a subscription for you. The thought experiment writes itself: what happens when bots negotiate with bots on both sides?
From hype to practical tooling for developers
The heart of the session—for engineers—was the tooling discussion.
GitHub Copilot: pattern‑aware code completion
Kreiseder relies heavily on Copilot and described how its suggestions adapt to project‑specific patterns. He detailed a console.log example: in one codebase, Copilot proposes a labeled output in a particular order; in another, it mirrors the project’s alternative convention. This is more than autocomplete; it’s context‑sensitive pattern completion.
A small aside: some features only surface once you read the docs—like the shortcut for cycling through alternative suggestions. Productivity hinges on knowing those capabilities.
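The pattern-completion point is easy to picture with the talk's console.log example. Two codebases, two labeling conventions, and a context-aware assistant continues whichever one the surrounding file already uses. The two conventions below are our illustration, not the specific ones Kreiseder showed:

```javascript
// Convention A: label first, then the value — one project's house style.
function formatA(label, value) {
  return `${label}: ${value}`;
}

// Convention B: value first, label in brackets — another project's style.
function formatB(label, value) {
  return `${value} [${label}]`;
}

// Copilot-style completion is context-sensitive: in a file full of
// formatA-shaped logs, a new log line gets suggested in that shape too.
console.log(formatA("userId", 42)); // → userId: 42
console.log(formatB("userId", 42)); // → 42 [userId]
```

That is the difference from plain autocomplete: the suggestion encodes the project's convention, not just the language's syntax.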
Alternatives: TabNine and CodeWhisperer
Copilot wasn’t first. TabNine has been around longer with a similar approach. Amazon’s CodeWhisperer naturally shines on AWS APIs (e.g., S3 uploads). Takeaway: there isn’t one “right” tool; teams should compare options that fit their stack.
Copilot with chat—context plus generation
GitHub previewed a “level up”: Copilot paired with chat that knows your project context and can handle requests like “write unit tests for this code” with solid quality. Kreiseder contrasted that with plain ChatGPT: it can generate tests today, but without project context. Context + generation is the meaningful jump—once broadly available.
CLI and snippet generation: GPT‑3 vs. GPT‑4
For terminal incantations, he tried both model generations. GPT‑3 produced plausible but wrong commands; GPT‑4 gave much better results. Team practice: keep a ChatGPT tab open—but verify. Plausible is not enough.
Developer search: Find instead of forum loops
Kreiseder also highlighted an AI search engine (“Find”). He tested it on his old Stack Overflow question about initializing empty strings in C#. The engine answered directly and linked to relevant sources (the SO thread, docs, Microsoft statements). The quality felt strikingly high without the ritual of forum back‑and‑forth.
His broader read: Q&A platforms face pressure. He cited trends—Stack Overflow traffic down, GitHub usage up, Copilot signups rising, and a developer survey packed with AI topics. The center of gravity is shifting toward context‑aware assistance and high‑signal answers.
Roles, titles, and AI‑washing
New job titles are popping up—“AI Prompt Engineer” being the meme of the moment. Kreiseder used a comic‑style riff to make the point: buzzwords get attention, but actual qualifications matter. He also warned about AI‑washing: tasks previously handled by a simple script are suddenly labeled “AI” without any substantive change.
And beware “instant experts.” Given that ChatGPT (at the time of the talk) had been public for about half a year, the label “AI expert” is often overstated. Serious AI work spans far beyond chat prompts.
Choosing the right tool for the job
One engineering principle came through strongly: tool choice matters. In a real task, Kreiseder needed to deploy a Lambda function alongside a CloudFormation template. ChatGPT generated a YAML that looked convincing—but was wrong. Copilot, trained across many code artifacts (including CloudFormation), suggested a correct template.
This generalizes:
- Specify the problem precisely.
- Pick tools trained on the right data domain (code vs. generic prose).
- Validate every output in the system under test.
That’s how hype becomes productive routine.
A sticky mental model: Super Mario’s fire flower
Kreiseder used a memorable metaphor: AI feels like getting the fire flower in Super Mario—you can suddenly shoot. That doesn’t mean every shot hits. Capabilities scale up; responsibility stays with you.
“To replace programmers with AI, clients will need to accurately describe what they want. We are safe.”
Conclusion: augment, don’t replace
On the session’s core question, our read aligns with Kreiseder’s lived experience: AI will not replace developers—but it will reshape and enhance the job substantially. Key lines we’re taking back to engineering teams:
- Model quality improves (GPT‑4 vs. GPT‑3), yet these are still pattern generators without world understanding.
- Adoption is unprecedented—but it doesn’t replace specs, architecture, or validation.
- IDE‑integrated tooling is already a force multiplier. Copilot‑style pattern adaptation saves time across boilerplate, tests, and repetitive tasks.
- Hallucinations are real. Verification is non‑negotiable. Project context inside the assistant will be a game changer.
- The ecosystem is re‑balancing: forums face headwinds; developer‑first search and in‑editor assistance gain ground.
- Not everything branded “AI” is AI. Not everyone touting expertise has it. Engineering skepticism is a virtue.
Practical takeaways for engineering teams
- Use generative AI for drafting and structuring text, but keep human review for tone and accuracy.
- Invest in IDE‑native assistants for code. Pattern‑aware completion accelerates the everyday tasks that consume most cycles.
- For infra templates, policies, and YAML: prefer tools trained on code artifacts—and validate within your deployment pipeline.
- For research and Q&A: consider AI search with source links to cut through noise—yet still verify against authoritative docs.
- For security and compliance: design safety policies that account for prompt reframing and “creative” bypass attempts.
- For product teams: explore project‑context assistants (e.g., chat over your codebase), but avoid silver‑bullet promises.
In the end, the message is sober and encouraging: AI is a powerful tool. In the hands of engineers with domain knowledge, craft, and judgment, it turns hype into real impact—without replacing the human.