
Anatomy of a Two-Week SIEM Migration: Why the License Overlap Math Has Quietly Changed

Zbigniew Gajuk · 2026-05-12 · 11 min read

The two-week migration most teams still plan as a year

Yale New Haven Health moved thirty thousand endpoints onto Microsoft Sentinel in two weeks. Most teams still plan a SIEM move at that scale as a six to nine month project, sometimes longer. Industry data places the median SIEM migration at nine months, three hundred and fifty thousand dollars in tooling and consulting fees, and a thirty percent failure rate.

The two-week timeline gets attention for the speed. The cost story underneath gets less attention, and it deserves more, because the speed and the cost are the same story.

The single largest expense of a slow migration is not the migration project itself. It is the months of paying for two SIEMs at once. The dual-license window, the period between the new platform entering production and the old platform being decommissioned, is where most of the migration's economics actually happens. Project speed equals license cost. Compress the window from nine months to two weeks and the dual-license tax compresses with it, by a factor large enough to change which migrations are viable and which are not.

This post walks through what specifically changed in the architecture to make a two-week migration possible, why the dual-license window has historically been so long, and the proportional license-cost math that the CFO usually has not been shown. The mechanics matter because if the engineering team understands them, the finance team can negotiate the next contract differently, and the overall move becomes a different kind of decision.

The real cost of a slow migration is the dual-license window

The conventional migration playbook treats license overlap as an unavoidable bridge cost. The new SIEM has to be in production for several months before the old SIEM can be safely decommissioned. During that bridge, both contracts are active.

The bridge has historically lasted three to nine months for typical mid-market migrations, and one to two years for large enterprise migrations with multi-region or multi-tenant deployments. During that bridge, the customer pays full price on the old contract, because the old contract does not shrink mid-term, and ramping price on the new contract, because volume is climbing as the team progressively moves sources onto the new platform.

Take a representative mid-market customer running two hundred gigabytes per day through Splunk Cloud at typical effective pricing in the five to seven dollar per gigabyte range, with Enterprise Security and other premium add-ons. The annual Splunk run-rate sits in the low seven figures. The team decides to move to CrowdStrike Falcon LogScale, where the workload-based pricing model offers significant per-gigabyte savings at this volume tier.

A nine-month overlap on that profile costs roughly seventy-five percent of one Splunk annual run-rate, plus the ramping LogScale cost as it scales toward steady-state production volume. The exact number depends on the contract terms, but the order of magnitude is consistent across customers: the dual-license cost typically equals one half to three quarters of one year of the old SIEM's run-rate.

A two-week overlap on the same profile costs approximately one twenty-fifth of one Splunk year. The proportional difference is roughly fifteen to twenty times.

For a customer with a low-seven-figure Splunk contract, the difference between a nine-month overlap and a two-week overlap is approximately seven figures of avoided spend. That number is rarely on the conventional migration's project plan, because the conventional plan treats it as fixed.
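For concreteness, here is the overlap arithmetic as a minimal Python sketch. The run-rate figure below is an illustrative assumption standing in for "low seven figures"; substitute your own contract numbers.

```python
# Dual-license overlap math. OLD_ANNUAL_RUN_RATE is an illustrative
# assumption, not a quoted price; plug in your own contract figures.
OLD_ANNUAL_RUN_RATE = 1_200_000  # USD, "low seven figures"

def overlap_cost(window_days: float, annual_run_rate: float) -> float:
    """Cost of the old SIEM during the parallel-run window:
    (proportion of the year the window covers) x (annual run-rate)."""
    return (window_days / 365) * annual_run_rate

nine_months = overlap_cost(270, OLD_ANNUAL_RUN_RATE)  # ~$888,000
two_weeks = overlap_cost(14, OLD_ANNUAL_RUN_RATE)     # ~$46,000

print(f"nine-month overlap: ${nine_months:,.0f}")
print(f"two-week overlap:   ${two_weeks:,.0f}")
print(f"ratio: {nine_months / two_weeks:.1f}x")       # ~19x
```

The nineteen-times ratio is just two hundred seventy days divided by fourteen. Nothing in it is vendor-specific.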

It is no longer fixed. It is a function of architecture.

Why the overlap window has historically been so long

Four reasons, each rooted in the SIEM-at-the-center model of observability that defined most of the 2010s.

First, detection content rebuild. Every detection rule, every saved search, every piece of correlation logic in the old SIEM has to run on the new SIEM before decommission is safe. SPL becomes KQL. Regex patterns get rewritten against new field names. Lookup tables get reimported. A typical mid-market deployment has hundreds of active detection rules; a typical large enterprise deployment has thousands. Conservative engineering estimates put the rebuild at multiple analyst-weeks of effort, even with tooling assistance.

Second, parser ownership. Each source in the old SIEM has source-specific parsing logic baked into the platform: Splunk props.conf and transforms.conf, Sentinel data connectors, Elastic ingest pipelines, QRadar DSMs. The new SIEM needs all of that re-implemented. The old SIEM does not export it cleanly because its parser expressions are platform-specific. New parsers get written from scratch, validated against sample data, and reconciled with the detection rules that depend on field names the new parser may emit differently.

Third, confidence by time elapsed. Even after detection rules are ported and parsers are rebuilt, the security team will not decommission the old SIEM until they have observed the new SIEM running in production long enough to trust it. That observation period historically requires the new SIEM to accumulate a meaningful operational history, including a real incident or two, before confidence reaches the threshold needed for cutover. Six months is a typical confidence-by-time window.

Fourth, agent and forwarder reconfiguration. Splunk universal forwarders, Sentinel data connectors, Beats agents: every source endpoint has been pointed at the old SIEM. Cutover requires repointing every endpoint at the new SIEM, validating the new flow, and decommissioning the old endpoint configuration. At thirty thousand endpoints, that work is non-trivial regardless of automation.

Each of these takes weeks. Stacked together, they explain the nine-month bridge. The bridge has not historically been arbitrary. It has been a function of how SIEMs integrate with their source environment.

What collapses the overlap window to two weeks

A pipeline-led architecture changes each of those four reasons, in ways that compound.

The first mechanism is dual-write at the pipeline tier, not the host. With a Cribl Stream layer absorbing collection upstream of any SIEM, the pipeline routes a copy of every event to the old SIEM and the new SIEM simultaneously, starting on day one of the migration. There is no host reconfiguration involved. The endpoints have been emitting to the pipeline all along. Adding the new SIEM is a destination configuration in Cribl, executed as a routing rule. Both SIEMs see the same events at the same time, in the same order, with the same fidelity. Confidence-by-comparison becomes possible immediately because divergence between the two platforms is measurable in real time, not deduced after months of operational history.
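A minimal conceptual sketch of the dual-write route, in Python rather than Cribl's actual configuration syntax (in Stream the real thing is a routing rule and a destination entry). The destination functions below are placeholders.

```python
# Conceptual model of dual-write at the pipeline tier. This is not Cribl's
# configuration syntax; it only shows why adding a SIEM is a one-line change
# at the pipeline, with zero endpoint reconfiguration.
from typing import Callable

Destination = Callable[[dict], None]

def send_to_splunk(event: dict) -> None: ...    # placeholder destination
def send_to_logscale(event: dict) -> None: ...  # placeholder destination

# Day one of the migration: append the new SIEM to the route.
DESTINATIONS: list[Destination] = [send_to_splunk, send_to_logscale]

def route(event: dict) -> None:
    # Every event reaches both platforms at the same time, in the same order.
    for destination in DESTINATIONS:
        destination(event)
```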

The second mechanism is replay for historical correlation. The day the new SIEM enters production, it has zero history. Conventionally, the new SIEM has to accumulate operational history before the security team can decommission the old one, because the only way to backtest the new SIEM's detection rules is on data that is already inside it. With a pipeline-managed open-format archive (Parquet on S3, Cribl Lake, or Azure Blob, schema-aligned to OCSF, ECS, or ASIM), historical events replay into the new SIEM the moment it stands up. The new SIEM can run six months of detection backtesting in days, against real production data, without waiting for that history to accumulate live.
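A minimal replay sketch, assuming the archive is readable with pyarrow and the new SIEM exposes a generic HTTP ingest endpoint. The URL, token, archive path, and batch size are all hypothetical placeholders; any real SIEM's ingest API will differ in its details.

```python
# Replay historical events from an open-format Parquet archive into the new
# SIEM. INGEST_URL, TOKEN, and the archive path are hypothetical placeholders.
import json
import pyarrow.parquet as pq
import requests

INGEST_URL = "https://new-siem.example.com/api/ingest"  # placeholder
TOKEN = "replace-me"                                     # placeholder
BATCH = 5_000

rows = pq.read_table("archive/2025/firewall.parquet").to_pylist()

for i in range(0, len(rows), BATCH):
    requests.post(
        INGEST_URL,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        data=json.dumps(rows[i : i + BATCH], default=str),
        timeout=30,
    ).raise_for_status()
```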

The third mechanism is parser ownership upstream. When parsing happens once at the pipeline tier instead of inside the SIEM, both SIEMs receive pre-parsed, schema-aligned events. The new SIEM does not need its own parsers because the pipeline has already done the work. The detection content rebuild that used to consume engineering quarters becomes a translation problem, not a re-implementation problem, because the field names and schemas stay constant across vendors. SPL to KQL becomes a syntax exercise. Detection content portability collapses from a multi-quarter project to a multi-week project, executable by AI-assisted translation tools that are increasingly mature in 2026.
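Here is a sketch of what parse-once-upstream means in practice. The pattern and field names are illustrative only, loosely modeled on ECS dotted names, not a complete mapping; the point is that both SIEMs receive the same names, so detection translation never has to renegotiate the schema.

```python
# Parse once at the pipeline tier; every downstream SIEM receives the same
# schema-aligned field names. Pattern and names are illustrative only.
import re

FW_LINE = re.compile(
    r"(?P<action>ALLOW|DENY)\s+(?P<src>\S+)\s*->\s*(?P<dst>\S+):(?P<port>\d+)"
)

def normalize_firewall(raw: str) -> dict | None:
    match = FW_LINE.search(raw)
    if match is None:
        return None  # route unparsed lines to a quarantine destination
    return {
        "event.action": match.group("action").lower(),
        "source.ip": match.group("src"),
        "destination.ip": match.group("dst"),
        "destination.port": int(match.group("port")),
    }

print(normalize_firewall("DENY 10.0.0.5 -> 203.0.113.9:443"))
```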

The fourth mechanism is cutover by routing percentage. The conventional cutover is a single high-risk weekend shadowed by the fear of rollback. The pipeline-led cutover is a routing weight that ramps from zero percent to ten percent to fifty percent to one hundred percent across whatever calendar the security team chooses. There is effectively no cutover event. The old SIEM continues to receive a portion of traffic until the team is comfortable shutting it off. Comfort comes from data, not from time.
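The ramp is easy to picture as a weighted routing decision. Again a conceptual sketch, not vendor configuration; the weights are the whole mechanism.

```python
# Cutover as a routing weight. Conceptual sketch, not vendor configuration.
import random

def route_by_weight(event: dict, new_siem_weight: float) -> str:
    """Send each event to one platform, chosen by the current cutover weight."""
    return "logscale" if random.random() < new_siem_weight else "splunk"

# Ramp on whatever calendar the team chooses; rollback is a weight change.
RAMP = [0.0, 0.10, 0.50, 1.0]  # e.g., week 1 through week 4
```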

Stacked together, these four mechanisms do not just compress the bridge. They eliminate the structural reasons the bridge had to exist.

A worked example: Splunk to CrowdStrike LogScale

Take the same representative mid-market customer from earlier. Two hundred gigabytes per day. Low-seven-figure annual Splunk run-rate. Decision to move to CrowdStrike Falcon LogScale.

Without a pipeline layer, the conventional migration plan looks like this. Months one through two: assess the environment, scope the detection rebuild, design the parser portability strategy. Months three through six: rebuild detection rules, port parsers, validate data flow on a subset of sources, ramp LogScale ingestion. Months seven through nine: run parallel operations, observe the new SIEM under real conditions, train analysts, prepare cutover. Month ten: cut over and decommission. The dual-license window runs nine months. The dual-license cost is approximately three quarters of one Splunk annual run-rate, plus the LogScale ramp.

With a pipeline layer in place from day one, the same migration looks like this. Week one: stand up Cribl Stream as the collection layer if it is not already in place, configure dual routing to Splunk and LogScale, and replay the last twelve months of historical events from the open-format archive into LogScale in parallel. Week two: run detection content translation and validation against real data on both platforms, compare alert volumes, false positive rates, and field coverage, decide cutover percentages, and begin the ramp. Cut off Splunk at week three or four. The dual-license window runs two weeks of full overlap and a brief tail. The dual-license cost is approximately one twenty-fifth of one Splunk annual run-rate.

The license-cost difference between those two timelines, on a low-seven-figure Splunk contract, is approximately seven figures of avoided spend. The migration project itself becomes a smaller proportion of the total economics, because the license overlap was always the dominant line item, and the conventional plan never named it as such.

The numbers above are illustrative for one specific volume profile. The methodology is general. For any migration, the dual-license window approximately equals the parallel-run timeline, and the dual-license cost approximately equals the proportion of the year that the window covers, multiplied by the old SIEM's annual run-rate. Pipeline-led migrations collapse the proportion. Conventional migrations leave it intact.

The CFO question is rarely "how long should the migration take." It is "how much of next year's SIEM budget gets eaten by a contract we are trying to leave." The answer depends almost entirely on whether a pipeline layer is in place before the migration begins.

What to negotiate before signing

If a migration is on the horizon, the architecture decision and the contract decision are the same decision. Two negotiation moves change the math materially.

On the old vendor's side, push for ramp-down options. Many multi-year SIEM contracts include early-termination clauses or volume ramp-down provisions that apply when the customer can demonstrate a coordinated migration plan. The pipeline-led architecture is precisely the demonstrable, coordinated plan that vendors require before granting ramp-down. If the contract has annual volume floors, structure the migration to cross the floor reset in the same quarter as the cutover, so the next year's commitment is set after the volume drop, not before it. If the contract has multi-year discounting tied to a renewal, time the migration so that the renewal happens after cutover, when the old SIEM's volume is a fraction of its peak. The legal text rarely changes. The execution math changes considerably.

On the new vendor's side, push back on minimum-overlap commitments. Many SIEM vendors structure their pricing to assume a multi-month parallel run, with their volume bands designed around steady ramp curves over six to twelve months. A pipeline-led migration produces a steeper ramp curve. The new SIEM hits target volume in weeks, not quarters. That changes the contract math in two ways. First, the volume band the customer commits to should reflect the post-migration steady state, not the ramp curve. Second, the new vendor often offers ramp credits or commit deferrals that align with conventional migration timelines. Push for those credits to remain valid even when the actual ramp is faster, since the architectural risk to the new vendor is lower under a pipeline-fronted migration than under a conventional one.

Both moves require the security and platform engineering team to be in the contract negotiation room. The license-overlap math is an engineering output, not a procurement input.

Where to start

The fastest way to operationalize the architecture before a migration is committed is a single-source dual-write pilot. Pick one high-volume source. Typically firewall logs, Windows event logs, or cloud audit trails. Stand up a Cribl Stream pipeline with a dual write to the existing SIEM and to object storage in an open format. Run for thirty days. Validate that the open-format archive is queryable and that detection rules can be ported against the schema-normalized output.
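At the thirty-day mark, "queryable" should mean a query you actually ran. A minimal validation sketch using DuckDB against the Parquet archive; the path and field names are hypothetical and match the normalizer sketch earlier in this post.

```python
# Day-30 pilot validation: confirm the open-format archive answers a real
# question without any SIEM involved. Path and field names are hypothetical.
import duckdb

top_blocked = duckdb.sql(
    """
    SELECT "source.ip", count(*) AS events
    FROM read_parquet('archive/firewall/*.parquet')
    WHERE "event.action" = 'deny'
    GROUP BY 1
    ORDER BY events DESC
    LIMIT 10
    """
).fetchall()

print(top_blocked)
```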

The result is a small production reference that confirms the dual-write architecture works in the real environment. From that reference, the next migration that becomes a candidate (renewal pressure, new SIEM evaluation, compliance-driven destination change) starts from a different baseline. The license-overlap math is no longer an abstract argument. It is a measurable property of an architecture the team has already proven.

The license-overlap math is one slice of a broader 2026 observability architecture shift. If you are working through adjacent ground, the resources below cover neighboring pieces in more detail.

On the platform side, the SIEM Migration solution page lays out the source-by-source parallel-routing model in operational detail. The Vendor Lock-In solution page explains how open-format archives decouple data from any single SIEM contract, which is the structural prerequisite for collapsing the overlap window. The Cost Optimization solution page covers the routing model across SIEM, APM, and object storage tiers in full.

The discovery call

If a migration is on your roadmap, or if a renewal cycle is approaching that will force the question, the right time to model the license overlap is before the contract is signed, not after. A thirty-minute call is usually enough to scope the single-source pilot and set up the math against your specific contract terms.

Schedule a discovery call and we will work through the dual-write design, model the proportional license-overlap savings against your current contract, and identify the negotiation moves that map cleanly onto your renewal cycle.

The migration that costs nine months of dual licensing is not the only kind of migration. It is the one still on most project plans, because the architecture under it has not yet changed.

#siem-migration #cribl #license-overlap #splunk #logscale #sentinel #dual-license #cfo #vendor-negotiation #pipeline-architecture

Want to discuss how this applies to your environment?

Schedule a discovery call and we will walk through your specific data sources, platforms, and cost challenges.

Schedule a Discovery Call