
Negar Layegh, Machine Learning Engineer bei APICHAP

Description

In this interview, Negar Layegh, Machine Learning Engineer at APICHAP, shares insights into her path to her current role, the challenges that come with machine learning, and her tips for newcomers.


Video Summary

In "Negar Layegh, Machine Learning Engineer bei APICHAP," Negar Layegh recounts switching from electrical to computer engineering after a C programming class in Iran, developing an interest in robotics, interning in Tel Aviv, returning there for a master’s, and then spending three years in machine learning and machine vision. Now the first employee at APICHAP, in her first role after university, she builds services that use large language models (GPT) to generate APIs; her focus today is prompt engineering, with plans to fine-tune models later, and she notes that LLMs excel at text but struggle with logic. Her takeaway for developers: foundations in math and algorithms help, but real progress comes from hands-on coding across many examples and from crafting precise prompts that guide models to better results.

From Iran’s Entrance Exam to First Employee at apichap: Negar Layegh’s Pragmatic Path into ML and Prompt Engineering

Our take on the session “Negar Layegh, Machine Learning Engineer bei APICHAP”

  • Title: Negar Layegh, Machine Learning Engineer bei APICHAP
  • Speaker: Negar Layegh
  • Company: apichap

We listened closely to a devstory that begins with Iran’s competitive university entrance exam, pivots during an introductory C programming class, and leads to building services with Large Language Models as the first employee at a startup. What stood out is how grounded this journey is: making decisions fast when clarity strikes, seeking motivation from real results, and growing expertise through practice rather than perfection.

A decisive early switch: Electrical Engineering to Computer Engineering—triggered by C

Negar opens with a common starting point in Iran: a high-stakes entrance exam determines where you study. With strong marks, she chose Electrical Engineering—until that first programming class in C changed the trajectory.

“I was like, okay, I want to do that.”

Then came the decisive move:

“I didn't even finish one semester as electrical engineering and I changed to computer engineering.”

That’s not an impulsive leap; it’s the clarity you get when you experience impact first-hand. Writing code, seeing something run, observing a result—those moments sharpen career choices more than any abstract plan. Negar’s early switch is the first recurring theme in her story: act when you discover where your work produces visible outcomes.

Bachelor years: Classical machine learning, practice, and a pull toward robotics

During her bachelor’s, Negar worked on classical machine learning—some, not a lot, as she notes. Robotics drew her in as well, a field where perception and control yield immediate feedback. What matters is that she encountered ML as a practical craft, not purely as theory. It was already about building, testing, and iterating.

The pattern is clear: tangible results were the fuel. Courses matter, but hands-on work makes the learning stick.

Tel Aviv as a proving ground: Internship, master’s, and three years in machine vision

After her bachelor’s, Negar pursued application over abstraction. She applied for an internship in Tel Aviv, spent a summer there, returned for her master’s—and then stayed:

“I kind of stayed there and worked in machine learning, machine vision for three years.”

Three years in machine learning and machine vision is a rigorous stretch. Even without project details, you can infer the daily realities: noisy data, shifting inputs, timing constraints, and the discipline to build systems that perform despite all that. For us, it explains the throughline to her current work with LLMs: engineering reliable services around imperfect, probabilistic models.

First real job, first employee: Ownership from day one at apichap

Negar brings us to the present:

“I'm working as a machine learning engineer at APICHAP. I'm a first employee of the startup. It's also my first real job after university.”

That’s a distinctive starting point. As the first employee, you don’t just ship features—you help shape the processes, quality bars, and culture that will persist. It is engineering and company-building in one.

The product idea: Generating APIs with models rather than hand-coding

Negar frames the core goal crisply:

“That's the whole idea of the startup, to write the APIs or to generate the APIs using machine learning models instead of coding it, one person codes it.”

APIs are formal—but they’re also language: endpoints, contracts, examples, error cases, and descriptions. This is where LLMs have become useful. When guided well, they organize patterns, follow formats, and compose coherent artifacts.

Large Language Models today: Strong at text, limited at logic—so the prompt is the lever

Negar’s description of the current landscape is both enthusiastic and sober:

“The machine learning models that are being used right now are large language models, which are a big hit right now. And the powerful part of them is that they have a window remembering all the stuff that they said so they can generate meaningful content.”

That “window”—a working context—enables LLMs to stay coherent across instructions, examples, and constraints. But she also calls out a critical limitation:

“They are not very smart, so they are still a bit dumb when it comes to logical stuff, but they're very good generating text and based on the prompts you would give them.”

This is the design premise for LLM-based engineering: exploit the models’ strengths (text and pattern synthesis) while compensating for their weaknesses (multi-step logic and brittle reasoning), and do so through careful prompt construction.
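That division of labor can be made concrete in code. The sketch below assumes a hypothetical service that turns rough endpoint specs into documented APIs; the names (`validate_spec`, `describe_endpoint_prompt`) are our own illustration, not APICHAP's actual code:

```python
# Sketch of the premise: deterministic logic stays in ordinary code,
# and only the text-generation part is delegated to the model.
# All names here are illustrative, not APICHAP's implementation.

def validate_spec(spec: dict) -> list[str]:
    """Logic checks the model would be bad at: done in code, never in the prompt."""
    errors = []
    for field in ("path", "method", "params"):
        if field not in spec:
            errors.append(f"missing required field: {field}")
    if spec.get("method") not in (None, "GET", "POST", "PUT", "DELETE"):
        errors.append(f"unknown HTTP method: {spec['method']}")
    return errors

def describe_endpoint_prompt(spec: dict) -> str:
    """Only the wording is left to the LLM, guided by an explicit prompt."""
    return (
        "Write a one-paragraph description for this API endpoint.\n"
        f"Path: {spec['path']}\nMethod: {spec['method']}\n"
        f"Parameters: {', '.join(spec['params'])}\n"
        "Do not invent parameters that are not listed."
    )

spec = {"path": "/users", "method": "GET", "params": ["limit", "offset"]}
assert validate_spec(spec) == []          # logic verified in code
prompt = describe_endpoint_prompt(spec)   # wording left to the model
```

The split keeps the brittle part (multi-step logic) out of the model's hands entirely, which is exactly the compensation strategy Negar describes.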

Day-to-day: Building services with GPT models, starting with what exists, aiming to fine-tune later

Negar describes her current focus succinctly:

“As a machine learning engineer, right now what I do exactly is prompt engineering to use the existing models… And right now I also build services using the GPT models, but in the future we'll be also fine tuning the models to fit better the problem that we currently have instead of being a general to get a better result.”

Behind that is a mature ML engineering playbook:

  1. Start with existing models and see how far they can take you—validate value before investing in specialization.
  2. Treat prompt engineering as the first and often strongest lever—spell out goals, steps, examples, and constraints.
  3. Consider fine-tuning when the problem class is clear—then move from general-purpose to task-shaped models.
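The first two steps of that playbook can be sketched without any custom model at all: assemble an explicit, example-anchored prompt and hand it to an existing chat model. The function below is a minimal illustration of ours, not APICHAP's service; the message format matches the widely used chat-completions convention:

```python
# Sketch of "start with existing models, lead with the prompt".
# build_messages is illustrative; the resulting list can be sent unchanged
# to any existing chat-completions endpoint.

def build_messages(goal: str, steps: list[str], example: str) -> list[dict]:
    """Prompt as the first lever: explicit goal, ordered steps, one anchoring example."""
    system = "You generate API definitions. Follow the steps in order."
    user = (
        f"Goal: {goal}\n"
        "Steps:\n"
        + "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
        + f"\nExample of the expected output:\n{example}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_messages(
    goal="Produce an OpenAPI path item for listing users",
    steps=["Define the route and method", "List query parameters", "Describe responses"],
    example='{"/users": {"get": {"summary": "List users"}}}',
)
# Fine-tuning (step 3) only enters the picture once prompts like this
# stop being enough for the problem class at hand.
```

Everything here is plain string assembly, which is the point: the cheapest, fastest iterations happen at this layer before any model training is involved.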

Her comment on how prompting mitigates logic limitations gets to the heart of it:

“With the correct prompt and explaining to them, you have to do this and that, then it will be a better result…”

It’s not a trick; it’s disciplined instruction design. Models respond best when you explicitly tell them what to do, in what order, with what structure.

Learning by doing: Games, classical ML tasks, and the motivation of visible results

Negar emphasizes how she learned through coding challenges and classical ML tasks early on. The key driver was seeing what was happening—contrasted with her brief time in Electrical Engineering, where results didn’t show up “very fast.” That insight matters for any learning path in tech:

  • Fast feedback beats slow abstraction.
  • Motivation compounds when effort turns into visible progress.
  • Short feedback loops make complexity manageable.

Her bottom line is as practical as it gets:

“It's mostly by practicing… of course, reading some basics and mathematics would also help in the long term and algorithms, but at the end of the day, it's mostly programming and working on different examples… it's more like experience that you can gain from doing different stuff.”

It’s not a dismissal of theory—it’s a sequencing: fundamentals provide durability, but capability is forged through hands-on work across varied examples.

Practical takeaways for developers

From Negar’s account, we distilled actionable guidance—no hype, just what tends to work:

  • Start with existing models: Build services around GPT and other LLMs before you sink time into deep customization. You’ll learn faster where the value is.
  • Respect logic limits: Don’t ask LLMs to carry long chains of reasoning on their own. Break tasks into explicit steps.
  • Treat the prompt as code: Write prompts with clear goals, constraints, examples, and step order, with the same care production code gets.
  • Iterate on the prompt first: Many quality gains come from improved instructions, not just downstream code changes.
  • Manage context thoughtfully: Use the model’s window effectively; keep relevant specs, decisions, and examples in the prompt.
  • Prove value before fine-tuning: Plan fine-tuning once the problem class is stable—avoid premature specialization.
  • Learn through projects: Games, small experiments, classical ML tasks—anything that yields fast results builds intuition.
  • Keep fundamentals in your toolkit: Mathematics and algorithms pay off over time, especially when grounded in practice.

Prompt engineering patterns implied by Negar’s approach

Negar’s remarks suggest a handful of prompt patterns that consistently help:

  • Goal-first instruction: Say “you have to do this and that”—explicit goals and steps beat vague asks.
  • Stepwise decomposition: Break tasks into ordered operations to reduce chances of logical drift.
  • Example-driven guidance: LLMs respond to in-context examples—use them to anchor format and edge cases.
  • Constraint emphasis: If structure matters (e.g., API specs), write constraints into the prompt and repeat them.
  • Format discipline: Specify output format to reduce noise and post-processing.
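Taken together, the five patterns amount to a reusable prompt template. A minimal sketch, assuming nothing beyond what the patterns themselves say; the function name, field layout, and the choice to repeat constraints at the end are our own:

```python
# One prompt builder covering the five patterns above.
# Names and structure are illustrative, not APICHAP's implementation.

def build_prompt(goal, steps, examples, constraints, output_format):
    parts = [f"Goal: {goal}"]                            # goal-first instruction
    parts.append("Steps, in order:")                     # stepwise decomposition
    parts += [f"  {i}. {s}" for i, s in enumerate(steps, 1)]
    if examples:
        parts.append("Examples to follow:")              # example-driven guidance
        parts += [f"  - {e}" for e in examples]
    parts.append("Constraints (follow all):")            # constraint emphasis
    parts += [f"  - {c}" for c in constraints]
    parts.append(f"Output format: {output_format}")      # format discipline
    parts.append("Remember: " + "; ".join(constraints))  # repeat what matters most
    return "\n".join(parts)

prompt = build_prompt(
    goal="Generate an API endpoint definition for user search",
    steps=["Choose the route", "List parameters", "Write the response schema"],
    examples=['GET /items?q=term -> {"items": []}'],
    constraints=["valid JSON only", "no undocumented parameters"],
    output_format="a single JSON object",
)
```

Stating each constraint once in a list and once at the end is a common way to keep instructions from drifting out of the model's attention in longer prompts.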

None of this is flashy—and that’s precisely why it works in practice.

Why “first employee” matters: Ownership, focus, and pragmatic delivery

Being the first employee means broad responsibility from day one. It means:

  • making decisions with limited certainty,
  • shipping services that deliver value now,
  • and learning from real usage instead of hypothetical use cases.

In this setting, “use existing models first, then fine-tune” isn’t a shortcut; it’s disciplined engineering. The goal is reliable services and a short path to feedback.

Quotes worth remembering

A few lines that stuck with us:

“I was like, okay, I want to do that.”

— on the moment programming clicked

“I didn't even finish one semester as electrical engineering and I changed to computer engineering.”

— on decisive course correction when clarity appears

“I'm a first employee of the startup. It's also my first real job after university.”

— on ownership and growth in an early-stage company

“They are not very smart… but they're very good generating text…”

— on a realistic view of LLMs

“With the correct prompt… then it will be a better result.”

— on the prompt as the central lever

“It's mostly by practicing…”

— on experience as the main teacher

Who this devstory speaks to

  • Students and career changers considering a move into computer engineering—this story shows the value of switching early when something resonates.
  • ML newcomers curious why so much real-world LLM work starts with prompt engineering—this is a frontline account.
  • Startup builders who know that time to value matters: start with general models and services; specialize once the target is clear.

Concrete next steps inspired by Negar Layegh

  • Pick a text-forward task (e.g., deriving API specs from examples) and build a minimal service around a GPT model.
  • Iterate on the prompt: draft, test, inspect gaps, add constraints and examples, measure improvement.
  • Curate representative examples and counter-examples—and include them in the prompt.
  • Track prompt changes like code changes—what improved what?
  • Once the problem class stabilizes, evaluate whether fine-tuning exceeds the returns from further prompt refinement.

These steps aren’t complex—but they demand discipline. That’s the kind of pragmatism reflected in Negar’s approach to building.
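"Track prompt changes like code changes" can start very small. A toy sketch, assuming a hand-assigned quality score per prompt version; the `PromptLog` class is our invention, not a tool from the talk:

```python
# Toy version log for prompts: every revision is recorded with a score and a
# note, so "what improved what?" stays answerable. PromptLog is illustrative.
import hashlib

class PromptLog:
    def __init__(self):
        self.versions = []

    def record(self, prompt: str, score: float, note: str = "") -> str:
        """Store a prompt revision; the short hash serves as a stable version id."""
        version_id = hashlib.sha256(prompt.encode()).hexdigest()[:8]
        self.versions.append(
            {"id": version_id, "prompt": prompt, "score": score, "note": note}
        )
        return version_id

    def best(self) -> dict:
        """Return the highest-scoring revision recorded so far."""
        return max(self.versions, key=lambda v: v["score"])

log = PromptLog()
log.record("Generate an API spec.", 0.41, "baseline")
log.record("Generate an API spec. Steps: 1) routes 2) params.", 0.58, "added steps")
log.record("Generate an API spec as JSON. Steps: 1) routes 2) params.", 0.73, "added format")
```

Even this much is enough to turn prompt iteration from guesswork into the measured loop described in the steps above: draft, test, score, compare.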

What resonated most

Negar’s tone is plainspoken and credible. No illusions about LLMs; she calls out strengths and weaknesses clearly. No inflated claims about her path; just a tight thread: discover, decide, practice, apply.

Her route—from that first C class, through robotics and machine vision, into LLM-based services—maps well to modern ML careers. You don’t start with the “perfect” model. You start with a task that creates value, learn where feedback is fast, and master tools by using them repeatedly on real examples.

Conclusion: A modern ML career without myth—rooted in practice

Negar Layegh’s story shows how grounded and energized a path into ML can be. From Iran’s admissions grind to a decisive degree switch. From coding through games and classical ML to a summer internship in Tel Aviv, then three years in machine learning and machine vision. And now: first employee at apichap, building services with GPT models, using prompt engineering daily, with fine-tuning on the horizon.

Our takeaway: visible results beat abstract plans. LLMs produce strong text under clear instruction—but they’re not logic engines. Practice, examples, and varied experience decouple progress from perfection. For developers, that’s a reassuring message: good work emerges from method, not myth. That’s exactly what “Negar Layegh, Machine Learning Engineer bei APICHAP” brought into focus.
