The invisible work that holds everything together

Your systems talk to each other — or your employees talk for them.

We connect ERP, CRM, industry software, databases and modern cloud services into a landscape that works as a whole. Without manual handovers, without spreadsheets between departments.

Most companies do not have a software problem — they have an integration problem. Their systems are individually fine, but they do not exchange data. People take over the bridge work: export here, import there, compare, correct. That costs time, produces errors and slows decisions. We build the bridges: clean, documented, monitored connections between your systems that run reliably — and keep your data correct in exactly one place.

Core promises
01

No more manual data juggling

The endless exports from one system, imports into another, the comparing in Excel — all of that disappears. Data flows to where it is needed, without a human intermediate step.

02

One source of truth

Customer data in the CRM, invoices in the ERP, campaigns in the marketing tool — everything stays consistent. Whichever system you open, the answer is the same. Decisions no longer rest on the question "which system is actually correct right now?".

03

Decisions in real time instead of over the weekend

When data flows immediately, reports are immediately current, dashboards immediately meaningful, alerts immediately timely. You see the state of your business in the present, not from a spreadsheet as of last Friday.

04

Scales without new hires

Many companies respond to growing volume by hiring new clerks to move data back and forth. With integrations, the bridges carry the growing volume, and you hire people for more valuable work.

Every detail, unpacked
08 areas
01

Integration today: no longer the nightmare of the past

Anyone who lived through an integration project fifteen years ago often carries bad memories. Today's reality has little in common with those projects.

What is it?

Integration projects had a bad reputation for a long time: expensive, slow, fragile. The reason was heavyweight enterprise service buses that required their own experts and broke with every system update. Today the world is different: modern systems have standardized APIs, lightweight integration platforms make orchestration easier, and modern software-engineering approaches ensure that integrations no longer collapse with every update. An integration project today is a normal project — no black art.

What does it look like?

Ten years ago, a company spent two months and a six-figure sum to connect CRM and ERP via an enterprise service bus that caused regular headaches afterwards. The same company has now rebuilt the integration completely: in three weeks, without an enterprise service bus, with clearly documented direct API connections, continuous monitoring and a codebase any developer can grasp in two hours. The operating cost is a tenth of what the old solution consumed. That is how far the integration world has evolved in just a few years.

Why does it matter?

Many companies hesitate on integration projects because of past experiences. That is economically expensive: you accept the manual bridges that cost time every week, even though the alternative today is much easier and cheaper than before. Integration in 2026 is no longer a luxury project — it is a baseline service that almost always amortizes quickly.

How we build it

We work with lightweight, modern integration approaches: clear API contracts instead of bus infrastructure, direct or lightly orchestrated connections depending on complexity, monitoring on every route, error handling as a building principle. We do not invest in large infrastructure that gets expensive later — we invest in clean, traceable connections that any developer team can understand and adapt.

Typical use cases

  • Companies with a grown software landscape without clean data exchange
  • Mid-sized businesses after a system switch or acquisition
  • Companies with manual bridges that cost time and produce errors
  • Digitalization initiatives focused on process efficiency
02

API-first: the only strategy that lasts

Build integrations on screen automation and you pay for repairs every month. Build them on APIs and you pay once, for the build.

What is it?

API-first means: we connect systems through their documented interfaces (Application Programming Interfaces), not through scripts that click their way through user interfaces. APIs are structured, stable, versioned gateways to a system's data and functions. They are made for other software to use. User interfaces are made for humans — and change with every design update.
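As a minimal sketch of the difference: an API-based integration builds an authenticated request against a documented, versioned endpoint instead of scripting a user interface. The URL, endpoint path and token below are illustrative, not any real system's API.

```python
import urllib.request


def customers_request(base_url: str, token: str, since: str) -> urllib.request.Request:
    """Build an authenticated request against a documented (hypothetical)
    customer endpoint. No screen automation involved: the contract is the
    versioned API path and its parameters, not pixel positions."""
    return urllib.request.Request(
        f"{base_url}/api/v2/customers?updated_since={since}",
        headers={"Authorization": f"Bearer {token}"},
    )
```

Because the contract is the versioned path (`/api/v2/`), a design update to the vendor's user interface cannot break this call.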

What does it look like?

For two years, an industrial company ran a solution that used screen automation to move data between an industry system and a reporting tool. It broke every two to three months because the industry system changed its interface — and each repair cost a day of downtime. We replaced it with an API-based integration: the industry system's API was stable, documented and versioned. Since the switch there have been three updates to the industry system — and not a single disruption to the integration.

Why does it matter?

Screen automation is a last resort, not a building principle. It is only justified when a system truly has no API — and even then the decision should be made consciously against the drawbacks. API-based solutions are more robust, faster, cheaper to operate and substantially less maintenance-intensive. Anyone still betting heavily on screen automation today is building debt that costs interest every month.

How we build it

We check at the start of every project which APIs are available in your landscape. Many systems have more of them than their users know — modern versions of ERP, CRM and industry solutions almost always ship with substantial API sets. Where APIs are missing, we work pragmatically: read-only database access, structured file exports, or — as a last resort — carefully built UI bridges with clear maintenance plans. The priority is always: clean before fast, durable before hacky.

Typical use cases

  • Modern cloud systems (all common CRMs, ERPs, marketing tools)
  • Enterprise software with published REST or GraphQL APIs
  • SaaS integrations (payments, shipping, communication)
  • In-house systems with documented interfaces
03

Data synchronization: the eternal fight for the one truth

When two systems hold the same information, you must decide which system is right. That decision must be made early — and above all consistently.

What is it?

In every company there is data that lives in multiple systems: customer data in CRM and ERP, product data in ERP and shop, employee data in HR and access control. Synchronization answers three questions for each kind of data: who is the leading source? In which direction does a change flow? What happens on conflict? Without clear answers, a data swamp emerges in which nobody knows which state is true. With clear answers, a clean landscape emerges.

What does it look like?

A services firm had a common scenario: contact data was maintained in the CRM but also manually corrected in the ERP when an invoice was due. The result: no record was correct — every place held its own truth. We defined a leading source per data type (contacts: CRM, invoice data: ERP, bank info: ERP), set up one-way synchronization (CRM → ERP for contacts, no reverse direction) and built a clear conflict rule (ERP changes to contacts are written back to the CRM and fall into a review step). Within three months the data was consolidated, and it stays that way — because the rule is clear and the machine enforces it.
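The rules from this example fit in a few lines of code. A minimal sketch, assuming simplified record types and values; the type names mirror the example above:

```python
# Leading source per data type, as agreed in the data map
LEADING_SOURCE = {"contact": "crm", "invoice": "erp", "bank": "erp"}


def resolve(record_type: str, crm_value, erp_value):
    """Return the value from the leading system, plus a flag telling the
    sync whether the non-leading side diverged and needs a review step."""
    leader = LEADING_SOURCE[record_type]
    winner = crm_value if leader == "crm" else erp_value
    needs_review = crm_value != erp_value
    return winner, needs_review
```

The point is not the code but the explicitness: the machine enforces one documented rule instead of whoever edited last.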

Why does it matter?

Inconsistent data is not just an operational annoyance — it costs deals when customers get wrong information, and it creates legal risk in data protection and accounting. At the same time it is invisible as long as no one looks: on the surface everything seems fine; problems surface only at edge cases that then hurt. Clean synchronization is unspectacular but economically one of the biggest values of integration projects.

How we build it

We start with a data map: which information lives where, who maintains it, which system is leading. This map is drawn and documented together with you — it is often the first time such an overview exists at all. On that basis we define synchronization rules per data type and build the technical implementation. Conflicts are never silently resolved but made visible and escalated to a responsible human until a rule kicks in.

Typical use cases

  • Customer data between CRM, ERP and marketing tools
  • Product data between ERP, shop and marketplace
  • Employee data between HR, IT and access control
  • Financial data between accounting, reporting and banking
04

Real-time or batch — when which is right

Not every integration has to transfer instantly. The reflexive demand for real-time is often more expensive than it is sensible.

What is it?

Fundamentally there are two patterns. Real-time integration: every change is transferred immediately, often via push messages between systems. Batch integration: changes are bundled in time windows (hourly, daily, weekly) and transferred. Real-time is more expensive in build and operation, batch is simpler but with latency. The right decision depends on the use case, not on wishful thinking.

What does it look like?

Two integrations at the same company. First: new customers from the shop should appear in the CRM so sales can reach out immediately. Real-time makes sense, because every minute of delay directly costs conversion. Second: product sales figures should go into the reporting tool daily. Real-time would be overkill — the reporting tool is opened at 8am anyway, not at 2am. A nightly batch job is more robust, easier to operate and cheaper. The decision was deliberately different per integration, not uniformly "real-time always".

Why does it matter?

Teams almost always overestimate the need for real-time. That leads to oversized solutions that are expensive to operate and complicated to debug in exceptional cases. Batch integrations have been underestimated for a long time — yet they cover most realistic use cases better: simpler, more robust, cheaper, more maintainable. The rule: real-time only where real-time demonstrably brings business value. Everywhere else: batch.

How we build it

We decide per integration: what happens if this information arrives an hour later? If the answer is "nothing", batch is the right way. If the answer is "direct revenue loss" or "annoyed customers", we talk real-time. Between the extremes there are many middle grounds — mini-batches every five or fifteen minutes are often the perfect compromise.
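A mini-batch of the kind described here is simple to implement: events are bucketed into fixed time windows, and each bucket is transferred as one unit. A sketch, assuming events arrive as (timestamp in seconds, payload) pairs:

```python
def mini_batches(events, window_seconds: int = 900):
    """Group (timestamp, payload) events into fixed time windows
    (default: 15 minutes). Each returned batch is transferred in one go."""
    batches: dict[int, list] = {}
    for ts, payload in events:
        slot = ts // window_seconds  # index of the window this event falls into
        batches.setdefault(slot, []).append(payload)
    return [batches[slot] for slot in sorted(batches)]
```

The same bucketing works for hourly or nightly batches; only the window size changes.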

Typical use cases

  • Lead handover (often real-time)
  • Invoice synchronization (often nightly batch)
  • Inventory levels (often mini-batch every few minutes)
  • Reporting and analytics (mostly daily batch)
  • Payment confirmations (real-time, due to customer expectations)
05

When systems miss each other — error handling as duty

Distributed systems fail differently from standalone ones. Account for that from the start, or you build solutions that break spectacularly.

What is it?

In a standalone system, errors are mostly clearly localizable: a button does not work, a function throws an error. In distributed systems a new error class emerges: network problems, timeouts, two systems with different states, messages sent but not received, actions half-performed. Without deliberate error handling every integration becomes a time bomb.

What does it look like?

A scenario we meet often: an integration transfers orders to a service provider overnight. One night the service provider is unreachable for ten minutes. A poorly built integration would mark the 200 affected orders as lost — and in the morning there would be a support alarm. Our integration detects the problem, retries three times with growing intervals, succeeds on the third attempt, and not a single order is lost. On persistent failure: a precise message to the responsible team with exactly the transactions needing manual review — not a sweeping "everything broken".
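The retry behavior in this scenario can be sketched in a few lines: a helper that retries with growing intervals and re-raises on persistent failure, so the caller can escalate with full context. A minimal sketch; the exception type and timings are illustrative:

```python
import time


def with_retries(action, attempts: int = 3, base_delay: float = 1.0):
    """Run an action that may fail temporarily. Retry with exponentially
    growing intervals (1s, 2s, 4s, ...); on persistent failure, re-raise
    so the error reaches the escalation path instead of being swallowed."""
    for attempt in range(attempts):
        try:
            return action()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # persistent problem: escalate, do not hide
            time.sleep(base_delay * 2 ** attempt)
```

In a real integration the caught exception, the number of attempts and the delays are tuned per route, and every attempt is logged.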

Why does it matter?

Bad error handling is the main reason integrations have the reputation of being fragile. Good error handling makes integrations boring in the good sense: they run, what happens is traceable, and the truly rare cases needing human attention are precisely reported. We often tell customers that "boring" is the compliment we wish for — it means nobody has to talk about the integration anymore.

How we build it

Our error architecture has multiple layers. Automatic retries on temporary problems (network, short outages), with growing intervals. Safe pauses on persistent problems, so no data damage occurs. Precise escalation messages with full context instead of generic alarms. Resumability, so work continues exactly where the error occurred — not from the top. Everything logged, everything traceable, individually tuned per integration.

Typical use cases

  • Finance integrations with zero-tolerance for data loss
  • Production-adjacent connections with real-time requirements
  • Communication integrations with guaranteed response times
  • Multi-step workflows with dependencies between systems
06

Legacy systems: the truth about old software

The most interesting integration projects are the ones with systems older than their users. And they are possible — when you know how.

What is it?

Legacy systems — older software that has been running productively for years — are often the backbone of the company and at the same time the biggest integration pain. They have little or no modern APIs, their documentation is spotty, their operators are cautious. Replacing them is usually not a realistic option — too expensive, too risky. Embracing them and embedding them into modern landscapes is the pragmatic path we almost always recommend.

What does it look like?

A company has used a specialized system for more than twenty years that runs on its own database and has no modern APIs. Replacement came up again and again — and was postponed again and again because the cost would be incalculable. We built an integration bridge: a lean intermediate layer reads in read-only mode directly from the specialized system's database, structures the data, and exposes it as a modern API for all other systems. Write access flows controlled through the existing import mechanisms. Result: the specialized system stays unchanged and stable, and everything else in the company can access it with modern means. Replacement became no longer urgent — and that is fine.
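The bridge pattern from this example can be sketched with a read-only connection. Here SQLite stands in for the specialized system's database, and the table and field names are illustrative:

```python
import sqlite3


class LegacyBridge:
    """Read-only bridge onto a legacy system's database.

    The legacy system is never changed: the connection is opened in
    read-only mode, and the bridge only queries and restructures data
    for modern consumers."""

    def __init__(self, db_path: str):
        # mode=ro guarantees the bridge cannot write to the legacy data
        self.conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)

    def customers(self) -> list[dict]:
        rows = self.conn.execute("SELECT id, name, city FROM kunden")
        return [{"id": i, "name": n, "city": c} for i, n, c in rows]
```

A thin layer like this is what other systems talk to; the legacy system itself stays untouched and stable.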

Why does it matter?

Many companies are caught between two extremes: the pressure to replace legacy systems (expensive, risky, slow) and the frustration of not being able to connect them to modern software (expensive, and a brake on everything else). The third path — respecting legacy systems and embedding them with intelligent integration bridges — is often overlooked but in most cases economically superior. We have clients whose thirty-year-old system is today better integrated than some cloud tools.

How we build it

We work with every available access route: official APIs (if present), read-only database access (with stable data models and clear approval), structured file exports, protocol-based interfaces for older industrial systems, and — as a last resort — carefully built UI automations with a documented maintenance strategy. Always under the principle: the legacy system is not changed, only queried or fed in a controlled way.

Typical use cases

  • Mid-sized businesses with historically grown specialized systems
  • Industry with old machine controls
  • Healthcare and public sector with long-running systems
  • Companies after acquisitions with a heterogeneous software landscape
07

Security at the system boundaries

Every integration is a door. The more doors, the more carefully the keys must be managed.

What is it?

Every integration means one system gives another access — to data, to functions, often to sensitive information. Security at these points answers three questions: who is allowed to do what (authentication and roles), which data actually flows (minimization, encryption, logging), what happens on abuse or outage (detection, containment, recovery). This sounds technical but at heart is exactly what your IT department and your data protection officer will advise.

What does it look like?

An integration between shop and ERP transfers customer data and line items. We check: does the shop really need access to the full customer record in the ERP (date of birth, internal customer notes)? No — a reduced view without those fields is enough. Accordingly, we build a dedicated, minimal API access for the shop, with only the data it really needs. This access runs over separate authentication, is logged and would be blocked immediately on anomalies. Data minimization as a security principle saves discussions with data protection and reduces the attack surface at the same time.
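Data minimization of this kind is often just a strict allowlist at the integration boundary. A sketch with illustrative field names:

```python
# Fields the shop integration actually needs; everything else never leaves the ERP
SHOP_VIEW = {"id", "name", "email", "shipping_address"}


def minimal_view(record: dict, allowed: set = SHOP_VIEW) -> dict:
    """Project a full ERP record down to the allowlisted fields.
    Sensitive fields (date of birth, internal notes) are dropped by
    default, not by remembering to remove them."""
    return {key: value for key, value in record.items() if key in allowed}
```

New fields added to the ERP later stay private until someone deliberately adds them to the allowlist.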

Why does it matter?

Integrations have long been the weak point in security architectures. Too-broad access, overly generous keys, unencrypted transmissions — classics, all avoidable. In the DACH region there is an additional factor: the GDPR explicitly requires checking which data actually flows. If you cannot document that, you have explaining to do at the next audit. Security at integration boundaries is therefore both a technical obligation and a regulatory precondition.

How we build it

We work by default with dedicated access accounts per integration (no shared "admin logins"), minimal permissions (only what the integration really needs), encrypted transmission between all systems, central management of API keys with a regular rotation plan, logging of every access action and automatic monitoring for anomalies. For regulated industries we add specific compliance controls.

Typical use cases

  • Integrations with personal data (GDPR-mandatory)
  • Finance integrations with access to payment information
  • Healthcare integrations with specially protected data
  • Integrations with external partners (suppliers, service providers)
08

Integrations that still live in five years

The greatest art in integration projects is not the build. It is making sure that in five years they run just as reliably as they do today.

What is it?

Integrations age differently from normal software. Each of the connected systems evolves: new versions, changed APIs, new security requirements. If the integration does not grow along, it will eventually break. Maintainability is therefore not a luxury but an economic obligation. A perfectly built integration nobody understands anymore after two years is worse than a simple solution anyone can read.

What does it look like?

We have several clients who have been running integrations we originally built for more than five years. All run stably, all have been adjusted a few times in that period (new system versions, new fields, new compliance requirements). Adjustments were each handled within a few days — because the solutions were documented, structured and built according to modern software-engineering principles. Companies that see integrations as "build once, never touch" unintentionally build debt that becomes expensive later.

Why does it matter?

Many vendors build integrations that look wonderful at first launch but require a complete redevelopment in three years, because nobody understands the code and nobody can safely touch changes. That is wasted money. Good integration architectures age gracefully: they can be adjusted, reviewed, handed over to other developers. The difference only becomes apparent over years — but then very clearly.

How we build it

We build integrations with the same standards we apply to product software: versioned code in Git, automated tests per integration, clear documentation of what flows where and why, easy changeability on field or format changes, logging with analysis capabilities, rollback ability on problems. And we invest in handover documentation that any reasonably experienced developer team can understand and take over in a short time. Vendor lock-in is the opposite of what we want: you should have the freedom to work with whomever you choose.
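In practice that standard looks unspectacular: small, named mappings with a test right next to them, versioned together. A sketch with hypothetical ERP field names:

```python
def map_invoice(erp_row: dict) -> dict:
    """Field mapping from a (hypothetical) ERP export to the accounting
    tool. Small and explicit, so a field change later is a one-line edit."""
    return {
        "number": erp_row["belegnr"],
        "total_cents": round(erp_row["summe"] * 100),
    }


def test_map_invoice():
    # runs automatically on every change to the integration
    assert map_invoice({"belegnr": "RE-1", "summe": 19.99}) == {
        "number": "RE-1",
        "total_cents": 1999,
    }
```

When the ERP renames a field or a new compliance requirement arrives, the test fails loudly instead of the integration failing quietly.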

Typical use cases

  • Long-term integrations in productive use
  • Critical integrations where outages are expensive
  • Companies with a growing internal developer team
  • Projects with compliance requirements on traceability
Real-world example

When integrations become invisible.

Before our collaboration, a mid-sized retail company had three full-time roles moving data between shop, ERP, shipping provider and accounting — exports, imports, corrections, reconciliations. Over six months we gradually built the most important bridges, not in one big push but systematically, one after another. Today all of these data flows run without human involvement. The three employees still work at the company, but on more demanding tasks — customer care, supplier communication, process improvement. They themselves say it has been the best professional change in years. That is the best kind of integration: one that becomes invisible to the company because it just runs, and that gives people more time for the work they enjoy.

Frequently asked

What we often get asked about System Integrations.

Do we have to replace our existing systems to build integrations?

Almost never. Most of our projects consist precisely of respecting and connecting your existing systems, not replacing them. Even very old systems can be integrated well today, with different techniques depending on the available interfaces. We recommend replacement projects only when the legacy system is at end of life or should be replaced for reasons unrelated to the integration itself.

How long does a typical integration project take?

A single, clearly scoped integration between two systems is usually productive in three to eight weeks. More complex projects with multiple systems, data migrations and process changes take three to six months. We almost always recommend splitting large integration efforts into smaller, clearly scoped projects — that reduces risk and delivers value faster.

What does an integration cost?

We share a credible number only after a conversation in which we understand the systems, data types and requirements involved. As orientation: a clean two-system integration with proper error handling and monitoring is typically a project of a few weeks and amortizes within a few months through eliminated manual work. Complex landscapes with many systems and high volume are correspondingly larger.

How do we deal with systems that have no APIs?

There are several paths. Read-only access directly to the database is often possible and reliable if the schema is documented and stable. File-based transfers (structured exports and imports) are a proven path for older systems. Screen automation is the last resort and is only chosen when no better option exists — with a clear maintenance plan. We decide pragmatically per system and document the choice.

What happens when an integration breaks?

Good integrations are built to handle typical outages themselves — automatic retries on network problems, safe pauses on persistent outages, precise escalation messages to the responsible people. Your team is notified in time and with clear context when manual attention is needed — not only when customers complain. In addition we build dashboards that continuously show the state of all integrations.

Can we continue working on the integrations ourselves later?

Yes, we explicitly recommend this. All integrations are versioned, documented and delivered with tests. If you have or are building an internal developer team, it can take over the integrations and extend them. We deliberately build no vendor lock-in and gladly accompany handovers when you choose to take the step.

Do integrations replace jobs?

In practice very rarely. What typically happens: the manual data work between systems disappears, and the same employees work on more demanding tasks — customer care, supplier communication, process improvement. Integration projects usually make teams more satisfied because the disliked inter-system juggling falls away.

Talk to D — at night, in the morning, right now.

D knows this topic in detail. Tell him your situation — he'll take over.

Start a conversation