FREQUENTIS
CCR
Description
In his devjobs.at TechTalk, Richard Draxelmayr of Frequentis talks about the advantages a good review culture brings to a dev team.
Video Summary
In CCR, Richard Draxelmayr of FREQUENTIS argues that Collaborate Constructive Review boosts team quality through shared ownership, two-way knowledge transfer, a unified team philosophy, an activity stream of changes, and more mistakes caught, while acknowledging time costs, peer pressure on creativity, and uneven review load. He outlines a practical setup: strong tooling with diffs, comments, tasks and traceability (Bitbucket on-prem and Crucible), converting non-text artifacts into text (Markdown/PlantUML), and heavy automation (tests and builds on every commit, quality gates that block merges, and CI deployments to a tester). He closes with review etiquette you can apply immediately: use I-statements, offer concrete suggestions, leave room for discussion, mind tone, and consider spikes or a "charisma sprint" to foster exploration.
CCR for Engineers: Building Better Teams with Collaborate Constructive Review – Takeaways from “CCR” by Richard Draxelmayr (FREQUENTIS)
Why this session matters for engineering teams
In “CCR,” Richard Draxelmayr (FREQUENTIS) talks about “Collaborate Constructive Review,” not the rock band. As a software and system engineer at FREQUENTIS since 2015—working in safety-critical communications—he lays out what makes reviews effective, how to structure the process with the right tools and automation, and how etiquette turns feedback into progress.
Draxelmayr’s remit spans multiple layers: building a software-centric mindset in a traditionally software-lean environment, adding automated quality checks wherever possible, and shipping across a broad range—web UIs, automated load testing, customer and internal training, and a Spring Boot-based REST backend. His main line of work: an automated deployment tool using Ansible. That variety gives his take on reviews a grounded credibility: it’s not theory, it’s the distilled practice of someone who lives with the consequences.
From our DevJobs.at editorial seat, we captured the essentials: why reviews are worth it, the pillars of a robust process, the tooling and automation patterns that actually help, and the etiquette that keeps the signal high and the noise low.
Why reviews are awesome: five durable reasons
Richard opens with a clear claim: reviews are awesome. His reasons are pragmatic and cumulative:
- Ownership transfer: Responsibility moves from the individual to the team. The contribution remains yours, but accountability becomes collective. Reviews help teams go from “my code, my risk” to “our code, our standard.”
- Know-how transfer in all directions: Reviews are learning environments—for juniors and seniors alike. Whether it’s a clever use of a JavaScript generator or remembering Python’s built-in deep copy functions, you learn by seeing patterns and getting nudged toward better ones (a small Python sketch follows this list).
- Forge and preserve team philosophy: Different opinions are a fact of life. Reviews are where those viewpoints converge into a coherent, enforceable philosophy—decisions made in the past remain relevant and can be referenced in future discussions.
- An activity stream for the codebase: You can’t track every change, but reviews surface crucial ones. That beats returning to a component months later only to find it unrecognizable.
- More eyes catch more issues: Reviews won’t find everything, but they will find the class of problems that reviews can uncover. Those would otherwise slip through.
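To make the deep-copy nudge concrete, here is a minimal Python sketch (our illustration, not code from the talk) of the distinction a reviewer might point out: the standard library’s copy module only detaches nested data with copy.deepcopy.

```python
import copy

config = {"servers": [{"host": "a", "port": 80}]}

shallow = copy.copy(config)    # top level copied, nested list still shared
deep = copy.deepcopy(config)   # fully independent clone of the whole structure

shallow["servers"][0]["port"] = 8080
print(config["servers"][0]["port"])  # 8080: the shallow copy leaked the change
print(deep["servers"][0]["port"])    # 80: the deep copy is unaffected
```

It is a small pattern, but exactly the kind of know-how that spreads in both directions during review.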
These points matter doubly in safety-critical environments. But the logic applies anywhere: reviews increase quality, spread knowledge, and give teams visibility into how their system evolves.
Counterarguments exist—address them head-on
Richard names three common objections and offers practical ways to respond:
- Time cost: Reviews trade focus time for assurance. In his context—safety-critical systems—that trade pays back. And even outside that, a second pair of eyes on gnarly problems is valuable.
- Peer pressure stifles creativity: Strong, experienced opinions can dampen exploration. Counteract with intentional space for discovery—spikes to investigate new technologies, or a “charisma sprint” where people can “do whatever the heck they like.”
- Review load imbalance: Spending weeks reviewing while your own work languishes is demoralizing. Spot such patterns early and rebalance, or the imbalance wears down both the reviewers and the quality of their reviews.
The meta-point is clear: reviews aren’t free, but the risks are manageable when you make them explicit and adjust the system.
Three pillars of a smooth review process
Richard’s foundation is simple and robust: tooling, automation, and etiquette. In his words: tooling is your bread and butter; automation keeps attention on what matters; etiquette is how messages land.
1) Tooling: diffs, comments, tasks, navigation, history
Trying to run multi-round reviews on Word documents quickly collapses—“after the second round nobody has any idea what’s going on,” and traceability is shaky. For code, lean on review tools with the right primitives.
What to look for:
- A good diff view: See exactly what changed.
- Comments: Ask questions, provide context, clarify intent.
- Tasks: Make requested changes explicit and trackable, ideally at the line level.
- Quick navigation: Move through large changes efficiently.
- Traceability into the past: Preserve discussions and decisions for months or years.
At FREQUENTIS, the team uses Bitbucket on-premise, Crucible, and a proprietary tool. The brand is secondary; what matters is whether the tool supports these capabilities without friction.
Turning non-text deliverables into text
Not everything we review is code. Richard’s practical fix: convert intermediate steps to text whenever possible. At FREQUENTIS, the contract documentation (API and glossary) lives in Markdown and PlantUML and is ultimately rendered into HTML for company-wide consumption. This allows the team to leverage the full arsenal of code review tooling while still delivering the final format the organization needs.
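As a sketch of that pipeline (assuming the third-party Python markdown package; the talk names Markdown, PlantUML, and HTML as formats but no specific toolchain), the render step can stay very small:

```python
# Minimal sketch: render reviewed Markdown sources into the HTML deliverable.
# Assumes the third-party "markdown" package (pip install markdown); PlantUML
# diagrams would be rendered separately with the PlantUML tool.
from pathlib import Path

import markdown

def render_docs(src_dir: str, out_dir: str) -> None:
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for md_file in Path(src_dir).glob("*.md"):
        html = markdown.markdown(md_file.read_text(encoding="utf-8"))
        (out / md_file.with_suffix(".html").name).write_text(html, encoding="utf-8")

if __name__ == "__main__":
    # Reviews run on the .md sources with diffs and line comments;
    # the rendered HTML is only the final deliverable.
    render_docs("docs", "site")
```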
2) Automation: let machines do machine work
“If you can automate it, especially in reviews, do it.” That’s the spirit. Automation removes busywork and keeps reviewers focused on high-level concerns.
Concretely:
- Enforce style and conventions automatically: Whitespace vs. tabs, newline at end of file, copyright headers in the right place—machines do this perfectly and consistently (a minimal check sketch follows this list).
- Automated quality tests: Unit tests and baseline quality checks belong in the pipeline, not in a reviewer’s checklist.
- Builds on every commit: At FREQUENTIS, pushes trigger builds for each commit. If something’s wrong, the system tells you.
- Quality gates on mainline branches: Changes that don’t meet criteria don’t get in. As Richard puts it, “There is no arguing with the machine.” Rules can be changed when needs evolve, but the gate isn’t negotiable per change.
- Full deployment to a tester: For the automated deployment tool he mentioned, every commit on the mainline developer branch triggers a full deployment to a tester—an automation milestone he’s proud of.
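To show what “machines do this perfectly” can look like, here is a minimal sketch of a convention check (a hypothetical script, not FREQUENTIS’s actual tooling): it verifies the end-of-file newline and a copyright header, and exits non-zero so the pipeline, not a reviewer, rejects the change.

```python
import sys
from pathlib import Path

HEADER_MARKER = "Copyright"  # assumed marker; real projects match a full header block

def check_file(path: Path) -> list[str]:
    problems = []
    text = path.read_text(encoding="utf-8")
    if not text.endswith("\n"):
        problems.append(f"{path}: missing newline at end of file")
    first_line = text.splitlines()[0] if text else ""
    if HEADER_MARKER not in first_line:
        problems.append(f"{path}: copyright header missing from the first line")
    return problems

def main(paths: list[str]) -> int:
    problems = [msg for p in paths for msg in check_file(Path(p))]
    for msg in problems:
        print(msg)
    return 1 if problems else 0  # non-zero exit fails the build

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```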
The result is consistency and speed: reviewers stop policing whitespace and start reviewing design, readability, risk, and behavior.
3) Etiquette: keep information high and emotion low
Tooling and automation set the stage; etiquette determines the outcome. Richard highlights several communication patterns:
- Phrase feedback from your perspective: Avoid “This won’t work.” Prefer “I thought this through, and I think it does not work as we intend.” That phrasing invites discussion rather than asserting an objective truth.
- Provide suggestions: Don’t just say something is wrong and leave the author to guess your preferred fix. Be explicit about what should change and offer a concrete suggestion.
- Leave room for discussion: Objective-truth claims narrow the space. Keep it conversational.
- Written communication reads colder: Intonation and facial expression are missing, so word choice matters. He jokes through a reading exercise to show how much tone shifts meaning. His plea: “keep your tone changes small.”
Etiquette isn’t fluff; it’s the difference between a review that helps and a review that hurts.
Applying CCR: a practical rollout path
Based on Richard’s talk, here’s a grounded sequence teams can adopt without adding process theater:
1) Choose a review tool that nails the basics
- Clear diffs, inline comments, tasks, fast navigation, preserved history.
- On-prem or cloud is a secondary choice; usability for the team is primary.
2) Set a review policy
- What needs review? Who reviews what? When is a review complete? What qualifies as a blocker?
- Capture rationale so decisions remain referenceable.
3) Automate style and license checks
- Whitespace/tabs, end-of-file newline, copyright headers.
- Remove repetitive nits from human reviewers.
4) Automate tests and builds per commit
- Every change triggers a build with visible results (a combined sketch for this step and the gate in step 5 follows the list).
- Test failures are a CI concern first, not a reviewer’s chore.
5) Enforce quality gates on mainline
- Failing changes do not merge; “no arguing with the machine.”
- Rules evolve when the team evolves—but as rules, not exceptions.
6) Convert non-text artifacts into text intermediates
- Maintain docs in Markdown/PlantUML (or similar) to gain full review capability.
- Render the final format (e.g., HTML) afterward.
7) Balance review workload
- Make review activity visible; watch for stretches where someone reviews nonstop.
- Rebalance early to avoid burnout and process fatigue.
8) Create space for exploration
- Use spikes or schedule a “charisma sprint” where people can pursue ideas freely.
9) Make etiquette explicit
- Share phrasing templates: “I think …”, “Could we …”, “My suggestion would be …”.
- Encourage reviewers to pair critique with concrete suggestions.
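Tying rollout steps 4 and 5 together, here is a minimal sketch of a per-commit pipeline entry point with a blocking gate (the commands are placeholders; the talk describes the behavior, builds, tests, and non-negotiable gates on every commit, but not this script):

```python
import subprocess
import sys

# Hypothetical per-commit pipeline: each step must pass before the next runs.
# The commands are placeholders; substitute your real build and test invocations.
PIPELINE = [
    ["python", "-m", "pytest", "-q"],  # automated quality tests (step 4)
    ["python", "-m", "build"],         # build the artifact (assumes the PyPI 'build' package)
]

def main() -> int:
    for cmd in PIPELINE:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            # The gate is not negotiable per change: a failing step blocks the merge.
            print(f"quality gate failed at: {' '.join(cmd)}", file=sys.stderr)
            return result.returncode
    print("all gates passed; change may merge")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Run on every commit, a script like this keeps red builds out of review discussions entirely.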
Memorable lines and practical anchors from the session
A few phrases and patterns from Richard’s talk make great internal slogans:
- “Reviews are awesome.” A reminder of the positive sum when done well.
- “More eyes just see more deficiencies.” A simple truth about team-based quality.
- “There is no arguing with the machine.” Quality gates defuse subjective debates over baselines.
- “Keep your tone changes small.” Written feedback needs careful phrasing.
- “Intermediate steps in text.” Markdown and PlantUML make even non-code reviewable.
- “Builds on every commit.” Immediate feedback accelerates learning and iteration.
Etiquette in action: a micro style guide for reviewers
Richard’s examples translate into a compact style guide you can adopt today:
- Don’t: “This won’t work.”
- Do: “I thought this through, and I think it does not work as we intend.”
- Don’t: “This is just wrong.”
- Do: “I’m concerned X will fail in scenario Y. Suggestion: Z.”
- Don’t: “Change this.”
- Do: “Could we … here? Alternatively, … might work because …”
- Always: Offer suggestions. The author should understand the expected change and the rationale.
This approach keeps the dialog open and the focus on outcomes.
Common friction points—and how CCR addresses them
- It takes too long: Narrow the human scope. Let automation handle formatting and basic quality; reviewers address logic, readability, risks, and design.
- We repeat ourselves: Codify recurring nits as automated checks. Humans shouldn’t enforce whitespace.
- Opinions escalate: Anchor your team philosophy in documented decisions you can reference in future reviews.
- Innovation stalls: Intentionally schedule spikes or a “charisma sprint” to give exploration oxygen.
- Review burnout: Track and rebalance review load so nobody gets stuck in perpetual review mode.
These patterns are embedded in Richard’s talk and require no speculative add-ons.
Conclusion: CCR is a mindset—tooling and automation make it scale
“Reviews are awesome.” After “CCR” by Richard Draxelmayr (FREQUENTIS), the statement reads as an operating principle rather than a slogan: share responsibility, circulate knowledge, forge a team philosophy, maintain a live activity stream, and catch the class of errors reviews can catch. To make that stick, put automation in charge of standards, choose tools that make diffs, comments, tasks, and history first-class, and practice etiquette that keeps the message clear.
Our takeaway: teams that treat tooling, automation, and etiquette as a single system turn reviews from obligation into advantage. And teams that—like the FREQUENTIS setup—run builds on every commit, enforce quality gates on mainline, and even auto-deploy to a tester for critical components, embed quality where it matters most: in every change.
The next step is straightforward: pick the right tool, automate what machines do best, and phrase feedback so information outweighs emotion. That’s how Collaborate Constructive Review becomes more than an acronym—it becomes the way your team works.