<?xml version="1.0" encoding="utf-8"?><?xml-stylesheet type="text/xsl" href="rss.xsl"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>heydru! Blog</title>
        <link>https://heydru.com/insights</link>
        <description>heydru! Blog</description>
        <lastBuildDate>Thu, 12 Mar 2026 00:00:00 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <item>
            <title><![CDATA[We sponsor drupal-ai — and it's changing how enterprise Drupal teams ship]]></title>
            <link>https://heydru.com/insights/drupal-ai-engineering-ai-into-enterprise-drupal</link>
            <guid>https://heydru.com/insights/drupal-ai-engineering-ai-into-enterprise-drupal</guid>
            <pubDate>Thu, 12 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Most Drupal teams are using AI the wrong way: pasting code into a chat window, hoping for the best, and getting output that looks right but breaks in production. That's not AI-assisted development — that's autocomplete with extra steps.]]></description>
            <content:encoded><![CDATA[<p>Most Drupal teams are using AI the wrong way: pasting code into a chat window, hoping for the best, and getting output that looks right but breaks in production. That's not AI-assisted development — that's autocomplete with extra steps.</p>
<p>At heydru!, we took a different approach. We sponsor <a href="https://eduardotelaya.com/drupal-ai/" target="_blank" rel="noopener noreferrer" class="">drupal-ai</a> — an open-source system built to make AI a reliable, deterministic part of enterprise Drupal engineering. Not a shortcut. Infrastructure.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="what-drupal-ai-actually-is">What drupal-ai actually is<a href="https://heydru.com/insights/drupal-ai-engineering-ai-into-enterprise-drupal#what-drupal-ai-actually-is" class="hash-link" aria-label="Direct link to What drupal-ai actually is" title="Direct link to What drupal-ai actually is" translate="no">​</a></h2>
<p>drupal-ai is not a plugin, a chatbot, or a code generator you run once and forget. It's an engineering system built specifically for professional Drupal teams: production-ready, opinionated, and designed for scale.</p>
<p>The system has four core components:</p>
<p><strong>Skills</strong> — 30+ reusable modules encoding domain knowledge about Drupal: how to structure a migration, how to write a proper service, how to handle entity API patterns correctly. Skills give the AI a foundation in Drupal-specific knowledge rather than relying on generic training data.</p>
<p><strong>Agents</strong> — 9 specialized agents for concrete development scenarios: PR reviews, module generation, debugging, migrations, architecture scoping. Each agent knows its context and its constraints.</p>
<p><strong>Rules</strong> — 8 enforced rule files that define how code should be written — coding standards, architectural patterns, naming conventions. These are constraints, not suggestions. The AI can't ignore them.</p>
<p><strong>Workflow hooks</strong> — Automated integrations with the Drupal development stack: DDEV, Drush, PHPCS, Drupal's own tooling. AI fits into the existing workflow rather than replacing it.</p>
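<p>As a concrete illustration of the kind of gate such hooks wire in (this is standard <code>drupal/coder</code> usage, not something specific to drupal-ai):</p>
<pre><code class="language-shell"># Lint generated and hand-written code against Drupal coding standards.
# Assumes drupal/coder is installed via Composer and its rulesets are
# registered with PHPCS.
vendor/bin/phpcs --standard=Drupal,DrupalPractice \
  --extensions=php,module,install,theme \
  web/modules/custom
</code></pre>
<p>The point is that AI output passes through the same automated checks as human output before anyone reviews it.</p>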
<p>The result: AI that produces code you can actually merge. Not code that looks plausible, but code that follows your team's standards, fits your architecture, and behaves correctly at runtime.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="why-we-sponsor-it">Why we sponsor it<a href="https://heydru.com/insights/drupal-ai-engineering-ai-into-enterprise-drupal#why-we-sponsor-it" class="hash-link" aria-label="Direct link to Why we sponsor it" title="Direct link to Why we sponsor it" translate="no">​</a></h2>
<p>We sponsor drupal-ai because it solves a problem we lived with for years: AI tooling that's powerful in theory but inconsistent in practice.</p>
<p>The inconsistency problem in Drupal is especially acute. Drupal has deep, layered APIs — the entity system, the plugin system, the cache system, configuration management. Generic AI models have reasonable Drupal knowledge but fail on specifics: deprecated APIs in D11, subtle differences between entity query methods, the correct way to structure a custom migration. The output looks right until it doesn't.</p>
<p>drupal-ai encodes the correct patterns. It codifies 14+ years of Drupal expertise into a system that can be run, tested, and improved. When we use it internally, we're not hoping the AI knows Drupal — we're giving it a structured map of how we build Drupal.</p>
<p>That changes the risk profile entirely. It's the difference between using AI as a convenience and using AI as infrastructure.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="what-changes-for-enterprise-teams">What changes for enterprise teams<a href="https://heydru.com/insights/drupal-ai-engineering-ai-into-enterprise-drupal#what-changes-for-enterprise-teams" class="hash-link" aria-label="Direct link to What changes for enterprise teams" title="Direct link to What changes for enterprise teams" translate="no">​</a></h2>
<p>For teams operating at scale — large codebases, strict standards, multiple developers — drupal-ai closes the gap between AI's potential and its real-world reliability.</p>
<p><strong>Onboarding accelerates.</strong> A new developer with drupal-ai has access to the team's accumulated knowledge from day one. Skills encode what would otherwise take months of code review and mentorship to absorb.</p>
<p><strong>Code review focuses on architecture.</strong> When AI output already conforms to your standards and patterns, reviewers stop spending cycles on style and boilerplate. Reviews become substantive.</p>
<p><strong>Migrations become more systematic.</strong> The migration agent has specific knowledge of Drupal's Migrate API — source plugins, process plugins, migration dependencies. It doesn't just generate code that looks like a migration; it generates code that follows the patterns that actually work.</p>
<p><strong>Knowledge stops being siloed.</strong> On most Drupal teams, expertise lives in individual developers' heads. When they leave, it leaves. drupal-ai makes expertise a shared, version-controlled asset.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="this-is-how-heydru-delivers">This is how heydru! delivers<a href="https://heydru.com/insights/drupal-ai-engineering-ai-into-enterprise-drupal#this-is-how-heydru-delivers" class="hash-link" aria-label="Direct link to This is how heydru! delivers" title="Direct link to This is how heydru! delivers" translate="no">​</a></h2>
<p>When clients work with us, they're not just getting engineers who know Drupal. They're getting a team that has systematized Drupal expertise into infrastructure that makes every engagement more consistent and every delivery more predictable.</p>
<p>drupal-ai is the tooling that makes that possible at a level of rigor that generic AI assistance can't match. We built it, we use it on every project, and we continue to develop it.</p>
<p>If you're running a serious Drupal platform and want to work with the team operating at this level, <a href="https://meetings.hubspot.com/heydru/conversemos" target="_blank" rel="noopener noreferrer" class="">let's find out if we're a fit</a>.</p>
<hr>
<p><em>drupal-ai is open source and actively maintained. Explore the full system at <a href="https://eduardotelaya.com/drupal-ai/" target="_blank" rel="noopener noreferrer" class="">eduardotelaya.com/drupal-ai</a>.</em></p>]]></content:encoded>
            <category>AI Engineering</category>
        </item>
        <item>
            <title><![CDATA[Designing scalable Drupal backend architectures]]></title>
            <link>https://heydru.com/insights/designing-scalable-drupal-backend-architectures</link>
            <guid>https://heydru.com/insights/designing-scalable-drupal-backend-architectures</guid>
            <pubDate>Thu, 05 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Most Drupal architecture problems aren't Drupal problems. They're decisions made early in a project — content modeling choices, caching strategies, integration patterns — that compound into performance and maintainability issues as the platform grows. Getting the architecture right means making those decisions deliberately, before they're made by accident.]]></description>
            <content:encoded><![CDATA[<p>Most Drupal architecture problems aren't Drupal problems. They're decisions made early in a project — content modeling choices, caching strategies, integration patterns — that compound into performance and maintainability issues as the platform grows. Getting the architecture right means making those decisions deliberately, before they're made by accident.</p>
<p>Here's how we think about scalable Drupal backend design.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="start-with-the-access-patterns-not-the-content-types">Start with the access patterns, not the content types<a href="https://heydru.com/insights/designing-scalable-drupal-backend-architectures#start-with-the-access-patterns-not-the-content-types" class="hash-link" aria-label="Direct link to Start with the access patterns, not the content types" title="Direct link to Start with the access patterns, not the content types" translate="no">​</a></h2>
<p>The most common architecture mistake is starting with content modeling and figuring out delivery later. The right order is the reverse: understand how the content will be consumed, then model around that.</p>
<p>Questions that should be answered before creating the first content type:</p>
<ul>
<li class="">Who are the consumers? Editorial team only, external APIs, a decoupled frontend, third-party integrations?</li>
<li class="">What are the read vs. write ratios? A site that publishes 10 times a day and serves 10 million page views needs a very different cache strategy than one publishing 1,000 times a day to 10,000 users.</li>
<li class="">What are the latency requirements? Sub-100ms for public pages? Real-time for editorial previews?</li>
<li class="">Is content relationship complexity high? Deeply nested entity references kill performance in ways that are hard to fix after the fact.</li>
</ul>
<p>The answers to these questions shape almost every subsequent architectural decision.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="decoupled-vs-traditional-making-the-right-call">Decoupled vs. traditional: making the right call<a href="https://heydru.com/insights/designing-scalable-drupal-backend-architectures#decoupled-vs-traditional-making-the-right-call" class="hash-link" aria-label="Direct link to Decoupled vs. traditional: making the right call" title="Direct link to Decoupled vs. traditional: making the right call" translate="no">​</a></h2>
<p>Decoupled Drupal (Drupal as a headless CMS with a separate frontend) is not inherently better than traditional Drupal. It's a tradeoff, and the wrong call in either direction creates problems.</p>
<p><strong>Traditional Drupal is the right choice when:</strong></p>
<ul>
<li class="">The team has strong Drupal frontend expertise (Twig, theme layer)</li>
<li class="">SEO and time-to-first-byte are critical and you want the simplest possible stack</li>
<li class="">The editorial team needs in-context preview and layout tools (Layout Builder, etc.)</li>
<li class="">The integration surface is primarily between Drupal and a few internal systems</li>
</ul>
<p><strong>Decoupled is the right choice when:</strong></p>
<ul>
<li class="">The frontend is genuinely complex and requires a modern JavaScript framework</li>
<li class="">Multiple consumers need the same content (web, mobile app, third-party systems)</li>
<li class="">The frontend team is stronger in React/Vue/Next than in Twig</li>
<li class="">Content delivery needs to be independent of Drupal's publishing pipeline</li>
</ul>
<p>The mistake we see most often is organizations going decoupled because it sounds modern, without a frontend team capable of owning the frontend platform. The result is two systems to maintain and none of the benefits.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="content-modeling-that-scales">Content modeling that scales<a href="https://heydru.com/insights/designing-scalable-drupal-backend-architectures#content-modeling-that-scales" class="hash-link" aria-label="Direct link to Content modeling that scales" title="Direct link to Content modeling that scales" translate="no">​</a></h2>
<p>A few principles that consistently improve long-term maintainability:</p>
<p><strong>Model for the editor, not just the output.</strong> Fields that make sense as a data structure aren't always fields that make sense to an editorial team. If editors are regularly misusing a field — putting body content in a summary field, for example — that's a modeling failure.</p>
<p><strong>Avoid deeply nested entity references.</strong> A node referencing a paragraph referencing a media entity referencing a file sounds fine on paper. At 50,000 nodes with complex views, it becomes a join nightmare. Flatten where you can. Use denormalized data structures when read performance matters more than write simplicity.</p>
<p><strong>Use configuration management from day one.</strong> All content types, fields, views, and display modes should live in version-controlled YAML. A content type that was created by clicking in the UI and never exported is a liability — it can't be deployed reproducibly, and it can't be code-reviewed.</p>
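<p>As a rough sketch, an exported content type lives in a file like <code>config/sync/node.type.article.yml</code> (exact keys vary slightly by Drupal version):</p>
<pre><code class="language-yaml"># Created once, then exported with `drush config:export`
# and deployed everywhere else with `drush config:import`.
langcode: en
status: true
name: Article
type: article
description: 'Editorial long-form content.'
new_revision: true
preview_mode: 1
display_submitted: false
</code></pre>
<p>Because the file is in version control, a reviewer can see exactly what changed about the content model in a pull request.</p>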
<p><strong>Plan for multilingual from the start.</strong> Adding multilingual support to a site that wasn't designed for it is expensive. Even if you're launching in one language, if there's any chance of adding more, build the translation infrastructure in from the beginning.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="caching-architecture">Caching architecture<a href="https://heydru.com/insights/designing-scalable-drupal-backend-architectures#caching-architecture" class="hash-link" aria-label="Direct link to Caching architecture" title="Direct link to Caching architecture" translate="no">​</a></h2>
<p>Caching is where Drupal backend architecture either pays off or falls apart under load.</p>
<p><strong>The caching layers we work with, from outside in:</strong></p>
<ol>
<li class="">
<p><strong>CDN (CloudFront, Fastly, etc.)</strong> — Handles the majority of anonymous traffic. Drupal's cache tags enable precise invalidation: when a node is updated, only the CDN cache entries that contain that node are purged, not the entire cache. The <code>purge</code> module ecosystem handles this well.</p>
</li>
<li class="">
<p><strong>Reverse proxy (Varnish)</strong> — Useful for infrastructure setups where a CDN isn't in place, or as an additional layer. Drupal emits standard cache headers out of the box, and modules from the <code>purge</code> ecosystem can expose cache tags in response headers so Varnish can invalidate precisely.</p>
</li>
<li class="">
<p><strong>Drupal's internal page cache</strong> — Serves fully cached pages for anonymous users without bootstrapping the full Drupal stack. Critical for high-traffic public sites.</p>
</li>
<li class="">
<p><strong>Drupal's dynamic page cache</strong> — Caches pages for authenticated users with user-specific elements excluded via cache contexts. Often overlooked, but significant for sites with logged-in users.</p>
</li>
<li class="">
<p><strong>Render cache</strong> — Individual render elements are cached based on cache tags, contexts, and max-age. Deep understanding of the render cache is the difference between a Drupal site that scales and one that doesn't.</p>
</li>
</ol>
<p>The critical concept throughout is <strong>cache tags</strong>. Every entity in Drupal has tags. When that entity is updated, all cache entries tagged with it are invalidated automatically. Lean into this system rather than fighting it.</p>
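<p>In code, leaning into the system means declaring cache metadata on every render array. A minimal sketch (the <code>renderTeaser()</code> helper is hypothetical):</p>
<pre><code class="language-php">use Drupal\Core\Cache\Cache;

// When this node is saved, every cache entry carrying its tag
// (render cache, dynamic page cache, CDN) is purged automatically.
$build = [
  '#markup' =&gt; $this-&gt;renderTeaser($node),
  '#cache' =&gt; [
    'tags' =&gt; $node-&gt;getCacheTags(),      // e.g. ['node:42']
    'contexts' =&gt; ['languages:language_interface'],
    'max-age' =&gt; Cache::PERMANENT,
  ],
];
</code></pre>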
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="api-design">API design<a href="https://heydru.com/insights/designing-scalable-drupal-backend-architectures#api-design" class="hash-link" aria-label="Direct link to API design" title="Direct link to API design" translate="no">​</a></h2>
<p>For sites exposing content via API — whether to a decoupled frontend or external consumers — JSON:API (built into Drupal core) covers most use cases well. It handles filtering, sorting, includes, sparse fieldsets, and pagination out of the box.</p>
<p>Custom REST endpoints are appropriate when:</p>
<ul>
<li class="">The response shape needs to differ significantly from Drupal's entity structure</li>
<li class="">You need to aggregate data from multiple entity types into a single response</li>
<li class="">Performance requirements make the overhead of JSON<!-- -->:API<!-- -->'s generic approach unacceptable</li>
</ul>
<p>When building custom endpoints, keep them thin. Business logic should live in services, not in controllers or plugins. This makes the API layer testable and replaceable.</p>
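<p>A minimal sketch of that separation (class and service names are illustrative):</p>
<pre><code class="language-php">// The controller stays thin: resolve dependencies, delegate, serialize.
class ArticleFeedController extends ControllerBase {

  public function __construct(protected ArticleFeedBuilder $feedBuilder) {}

  public static function create(ContainerInterface $container) {
    // 'mymodule.article_feed_builder' is a custom service defined in
    // mymodule.services.yml; that's where the business logic lives.
    return new static($container-&gt;get('mymodule.article_feed_builder'));
  }

  public function feed(): JsonResponse {
    return new JsonResponse($this-&gt;feedBuilder-&gt;build());
  }

}
</code></pre>
<p>The service can be unit-tested without HTTP, and the endpoint can be swapped out without touching the logic.</p>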
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="database-patterns">Database patterns<a href="https://heydru.com/insights/designing-scalable-drupal-backend-architectures#database-patterns" class="hash-link" aria-label="Direct link to Database patterns" title="Direct link to Database patterns" translate="no">​</a></h2>
<p>A few things that consistently show up as performance bottlenecks:</p>
<p><strong>Views filtering or sorting on unindexed columns.</strong> Drupal's Views module generates SQL queries that can be devastating on large datasets if the columns being filtered or sorted aren't indexed. Always check the query being generated for complex views. Add database indexes where needed — Views doesn't do this automatically.</p>
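<p>Indexes on field tables can be added in an update hook so the change deploys with the code (table and column names here are illustrative):</p>
<pre><code class="language-php">/**
 * Adds an index for a column a large view filters on.
 */
function mymodule_update_10001() {
  // addIndex() needs the partial table spec for the indexed columns.
  $spec = [
    'fields' =&gt; [
      'field_event_date_value' =&gt; ['type' =&gt; 'varchar', 'length' =&gt; 20],
    ],
  ];
  \Drupal::database()-&gt;schema()-&gt;addIndex(
    'node__field_event_date',
    'field_event_date_value_idx',
    ['field_event_date_value'],
    $spec,
  );
}
</code></pre>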
<p><strong>Entity queries that load full entities unnecessarily.</strong> <code>\Drupal::entityTypeManager()-&gt;getStorage('node')-&gt;loadMultiple($ids)</code> loads complete entities. If you only need a field value, use an entity query to get IDs and then query the field table directly. The difference at scale is significant.</p>
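<p>The pattern looks like this (the <code>field_price</code> field is illustrative):</p>
<pre><code class="language-php">// Get matching IDs without instantiating entities.
$ids = \Drupal::entityQuery('node')
  -&gt;condition('type', 'product')
  -&gt;condition('status', 1)
  -&gt;accessCheck(FALSE)   // deliberate: bulk context, no per-row access checks
  -&gt;execute();

// Then read just the one value from the field's dedicated table.
$prices = \Drupal::database()
  -&gt;select('node__field_price', 'p')
  -&gt;fields('p', ['entity_id', 'field_price_value'])
  -&gt;condition('p.entity_id', $ids, 'IN')
  -&gt;execute()
  -&gt;fetchAllKeyed();
</code></pre>
<p>Loading full entities runs hooks, builds objects, and joins many tables; this touches two.</p>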
<p><strong>Missing composite indexes on custom tables.</strong> If you're writing custom tables (for logging, for external data sync, for anything), design the indexes around the queries you'll run, not around the data you're storing.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="a-note-on-infrastructure">A note on infrastructure<a href="https://heydru.com/insights/designing-scalable-drupal-backend-architectures#a-note-on-infrastructure" class="hash-link" aria-label="Direct link to A note on infrastructure" title="Direct link to A note on infrastructure" translate="no">​</a></h2>
<p>Good Drupal backend architecture is only as good as the infrastructure running it. A few things that matter more than teams typically realize:</p>
<ul>
<li class=""><strong>PHP-FPM tuning</strong> — The default <code>pm.max_children</code> value is almost never right for production. Profile under realistic load.</li>
<li class=""><strong>OPcache</strong> — Should be enabled and sized appropriately. A warm OPcache makes a meaningful difference in response times.</li>
<li class=""><strong>Database read replicas</strong> — For read-heavy sites, routing read queries to replicas reduces load on the primary significantly. Drupal's database abstraction layer supports this natively.</li>
</ul>
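<p>Replica routing is configured in <code>settings.php</code> (hostnames and credentials here are illustrative):</p>
<pre><code class="language-php">// Primary: all writes, plus any reads not explicitly routed away.
$databases['default']['default'] = [
  'driver' =&gt; 'mysql',
  'database' =&gt; 'drupal',
  'host' =&gt; 'db-primary.internal',
  'username' =&gt; 'drupal',
  'password' =&gt; getenv('DB_PASSWORD'),
];

// One or more replicas; Drupal picks one per request for queries
// that ask for the 'replica' target.
$databases['default']['replica'][] = [
  'driver' =&gt; 'mysql',
  'database' =&gt; 'drupal',
  'host' =&gt; 'db-replica-1.internal',
  'username' =&gt; 'drupal_ro',
  'password' =&gt; getenv('DB_RO_PASSWORD'),
];
</code></pre>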
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-compounding-effect">The compounding effect<a href="https://heydru.com/insights/designing-scalable-drupal-backend-architectures#the-compounding-effect" class="hash-link" aria-label="Direct link to The compounding effect" title="Direct link to The compounding effect" translate="no">​</a></h2>
<p>Architecture decisions compound. A good content model makes migrations easier. A clean caching strategy makes infrastructure simpler. Proper use of configuration management makes deployments safer. These things build on each other, and the gap between a well-architected Drupal platform and a poorly-architected one grows with time.</p>
<p>The investment in getting architecture right at the start is almost always returned in reduced maintenance cost within the first year.</p>]]></content:encoded>
            <category>Architecture</category>
        </item>
        <item>
            <title><![CDATA[Migrating Drupal 7 to 11 without downtime]]></title>
            <link>https://heydru.com/insights/migrating-drupal-7-to-11-without-downtime</link>
            <guid>https://heydru.com/insights/migrating-drupal-7-to-11-without-downtime</guid>
            <pubDate>Wed, 14 Jan 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Drupal 7 reached end of life in January 2025. Most organizations still running it aren't behind because they lack resources — they're behind because every migration approach they've considered feels too risky. The platform is doing real work. It can't go offline for a week while a team rebuilds it.]]></description>
            <content:encoded><![CDATA[<p>Drupal 7 reached end of life in January 2025. Most organizations still running it aren't behind because they lack resources — they're behind because every migration approach they've considered feels too risky. The platform is doing real work. It can't go offline for a week while a team rebuilds it.</p>
<p>Here's how we approach D7 → D11 migrations at heydru: treating them as parallel systems, not upgrades.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="why-this-isnt-an-upgrade">Why this isn't an upgrade<a href="https://heydru.com/insights/migrating-drupal-7-to-11-without-downtime#why-this-isnt-an-upgrade" class="hash-link" aria-label="Direct link to Why this isn't an upgrade" title="Direct link to Why this isn't an upgrade" translate="no">​</a></h2>
<p>The instinct is to treat a major version jump as an upgrade — export the database, run some scripts, fix the errors. That works between minor Drupal versions. It does not work between Drupal 7 and Drupal 11.</p>
<p>The underlying architecture is fundamentally different. Drupal 7 uses a hook-based system with procedural PHP. Drupal 8 introduced an object-oriented kernel built on Symfony components, and that foundation has carried through to D11. The content structure, the routing system, the configuration management, the theming layer — all of it changed.</p>
<p>When we take on a D7 migration, the first thing we tell the client is: <strong>you are rebuilding your platform, not upgrading it.</strong> That reframe matters because it changes how you plan, how you staff, and how you manage risk.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-parallel-run-strategy">The parallel-run strategy<a href="https://heydru.com/insights/migrating-drupal-7-to-11-without-downtime#the-parallel-run-strategy" class="hash-link" aria-label="Direct link to The parallel-run strategy" title="Direct link to The parallel-run strategy" translate="no">​</a></h2>
<p>The safest migration approach we've found is running D7 and D11 in parallel until D11 is ready to take over fully.</p>
<p><strong>Phase 1 — Audit and content modeling</strong></p>
<p>Before writing a single line of new code, we do a complete audit of the D7 site: every content type, every field, every view, every module, every integration. We document what's actually being used versus what was installed and forgotten. Legacy Drupal sites routinely have 80+ contributed modules enabled, and in practice 30% of them are doing nothing useful.</p>
<p>This audit becomes the migration spec. We don't migrate everything — we migrate what matters. That distinction alone has saved clients weeks of work.</p>
<p><strong>Phase 2 — Build D11 in isolation</strong></p>
<p>We stand up the D11 platform in a separate environment, completely decoupled from D7. Same database server, different database. We rebuild content types in D11 using configuration management from the start — no clicking in the UI, everything in YAML. This means the D11 build is fully reproducible from day one.</p>
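<p>Concretely, that reproducibility is two Drush commands:</p>
<pre><code class="language-shell"># On the build environment: export active configuration to YAML and commit it.
drush config:export -y
git add config/sync &amp;&amp; git commit -m "Export rebuilt content types"

# On any other environment: the same site state is one import away.
drush config:import -y
</code></pre>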
<p>Custom module code gets rewritten as Drupal 8+ plugins and services. There's no shortcut here, but the new code is substantially cleaner and more testable than most D7 module code.</p>
<p><strong>Phase 3 — Content migration with Drupal's Migrate API</strong></p>
<p>The Migrate API, part of Drupal core and stable since Drupal 8.5, is the right tool for this. It provides a structured pipeline: source plugins read data from D7, process plugins transform it, destination plugins write it to D11.</p>
<div class="language-php codeBlockContainer_Ckt0 theme-code-block" style="--prism-background-color:hsl(220, 13%, 18%);--prism-color:hsl(220, 14%, 71%)"><div class="codeBlockContent_QJqH"><pre tabindex="0" class="prism-code language-php codeBlock_bY9V thin-scrollbar" style="background-color:hsl(220, 13%, 18%);color:hsl(220, 14%, 71%);text-shadow:0 1px rgba(0, 0, 0, 0.3)"><code class="codeBlockLines_e6Vv"><div class="token-line" style="color:hsl(220, 14%, 71%);text-shadow:0 1px rgba(0, 0, 0, 0.3)"><span class="token plain">// Example: simple node migration source</span><br></div><div class="token-line" style="color:hsl(220, 14%, 71%);text-shadow:0 1px rgba(0, 0, 0, 0.3)"><span class="token plain">source:</span><br></div><div class="token-line" style="color:hsl(220, 14%, 71%);text-shadow:0 1px rgba(0, 0, 0, 0.3)"><span class="token plain">  plugin: d7_node</span><br></div><div class="token-line" style="color:hsl(220, 14%, 71%);text-shadow:0 1px rgba(0, 0, 0, 0.3)"><span class="token plain">  node_type: article</span><br></div><div class="token-line" style="color:hsl(220, 14%, 71%);text-shadow:0 1px rgba(0, 0, 0, 0.3)"><span class="token plain">process:</span><br></div><div class="token-line" style="color:hsl(220, 14%, 71%);text-shadow:0 1px rgba(0, 0, 0, 0.3)"><span class="token plain">  title: title</span><br></div><div class="token-line" style="color:hsl(220, 14%, 71%);text-shadow:0 1px rgba(0, 0, 0, 0.3)"><span class="token plain">  body: body/0/value</span><br></div><div class="token-line" style="color:hsl(220, 14%, 71%);text-shadow:0 1px rgba(0, 0, 0, 0.3)"><span class="token plain">  field_category:</span><br></div><div class="token-line" style="color:hsl(220, 14%, 71%);text-shadow:0 1px rgba(0, 0, 0, 0.3)"><span class="token plain">    plugin: migration_lookup</span><br></div><div class="token-line" style="color:hsl(220, 14%, 71%);text-shadow:0 1px rgba(0, 0, 0, 0.3)"><span class="token plain">    migration: d7_taxonomy_term_tags</span><br></div><div 
class="token-line" style="color:hsl(220, 14%, 71%);text-shadow:0 1px rgba(0, 0, 0, 0.3)"><span class="token plain">    source: field_tags</span><br></div><div class="token-line" style="color:hsl(220, 14%, 71%);text-shadow:0 1px rgba(0, 0, 0, 0.3)"><span class="token plain">destination:</span><br></div><div class="token-line" style="color:hsl(220, 14%, 71%);text-shadow:0 1px rgba(0, 0, 0, 0.3)"><span class="token plain">  plugin: entity:node</span><br></div><div class="token-line" style="color:hsl(220, 14%, 71%);text-shadow:0 1px rgba(0, 0, 0, 0.3)"><span class="token plain">  default_bundle: article</span><br></div></code></pre></div></div>
<p>We run migrations iteratively, not once. The first run will expose field mapping issues, missing taxonomies, broken file references. We fix those, wipe the D11 content tables, and run again. By the fifth or sixth iteration the migration runs cleanly in under an hour for most sites.</p>
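<p>The iteration loop itself is a handful of Drush commands (the migration ID is illustrative; the <code>migrate:*</code> commands come from Drush / migrate_tools):</p>
<pre><code class="language-shell">drush migrate:import d7_node_article --feedback=1000
drush migrate:status d7_node_article     # imported vs. unprocessed counts
drush migrate:messages d7_node_article   # row-level errors from this run
drush migrate:rollback d7_node_article   # wipe the imported content, fix, rerun
</code></pre>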
<p><strong>Phase 4 — Continuous sync</strong></p>
<p>This is the part most teams skip, and it's the most important one. Once D11 content is in good shape, we set up a sync job that runs the migration daily. Every new node, every content update, every taxonomy change in D7 gets migrated to D11 automatically.</p>
<p>This keeps D11 current. It means the cutover isn't a one-time data dump — it's just turning off the sync and flipping DNS.</p>
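<p>In practice the sync is just the same migrations on a schedule, with <code>--update</code> so changed source rows are re-imported (group name and paths are illustrative):</p>
<pre><code class="language-shell"># crontab on the D11 host: nightly D7 → D11 content sync at 02:00.
0 2 * * * cd /var/www/d11 &amp;&amp; vendor/bin/drush migrate:import --group=d7_content --update
</code></pre>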
<p><strong>Phase 5 — The cutover</strong></p>
<p>When D11 has been running in parallel for long enough that the team is confident in it, the cutover is straightforward:</p>
<ol>
<li class="">Put D7 in maintenance mode</li>
<li class="">Run a final full migration to catch any content updated since the last sync</li>
<li class="">Update DNS to point to D11</li>
<li class="">Monitor for 48 hours</li>
<li class="">Keep D7 accessible internally for one month as a fallback</li>
</ol>
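<p>Steps 1 and 2 reduce to a few commands (the <code>@d7</code>/<code>@d11</code> site aliases are illustrative):</p>
<pre><code class="language-shell"># Freeze editorial on the old site (D7 stores this in a variable).
drush @d7 vset maintenance_mode 1

# Final catch-up migration into D11.
drush @d11 migrate:import --group=d7_content --update

# After DNS flips, watch the new site's logs closely.
drush @d11 watchdog:show --count=50
</code></pre>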
<p>Total downtime: the time it takes to flip DNS — minutes, not hours, not days.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="common-failure-modes">Common failure modes<a href="https://heydru.com/insights/migrating-drupal-7-to-11-without-downtime#common-failure-modes" class="hash-link" aria-label="Direct link to Common failure modes" title="Direct link to Common failure modes" translate="no">​</a></h2>
<p><strong>Trying to do it all at once.</strong> Teams that try to migrate content, rebuild features, upgrade infrastructure, and launch all in one go almost always blow past their timeline. Phase it.</p>
<p><strong>Underestimating custom module rewrites.</strong> A 500-line D7 module doesn't become a 500-line D11 module. The concepts are different. Budget more time than you think you need.</p>
<p><strong>Skipping the audit.</strong> The client thinks they know what their site does. They are never fully right. The audit always surfaces something — usually a niche feature that two people use and three people have forgotten exists, with no documentation.</p>
<p><strong>Not running migrations repeatedly.</strong> Running the migration once early and not touching it again means you'll hit a wall of data issues during the real cutover. Run it weekly at minimum throughout the project.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="what-a-realistic-timeline-looks-like">What a realistic timeline looks like<a href="https://heydru.com/insights/migrating-drupal-7-to-11-without-downtime#what-a-realistic-timeline-looks-like" class="hash-link" aria-label="Direct link to What a realistic timeline looks like" title="Direct link to What a realistic timeline looks like" translate="no">​</a></h2>
<p>For a mid-size Drupal 7 site (15–30 content types, 50,000–200,000 nodes, 5–10 custom modules), we typically budget:</p>
<ul>
<li class="">Audit and scoping: 2–3 weeks</li>
<li class="">D11 build: 8–16 weeks (varies heavily by custom functionality)</li>
<li class="">Migration pipeline development and iteration: 4–6 weeks, running in parallel with build</li>
<li class="">User acceptance testing: 3–4 weeks</li>
<li class="">Cutover and stabilization: 1–2 weeks</li>
</ul>
<p>Total: 4–6 months for a well-scoped project. Teams that rush this to 2–3 months usually end up doing a second migration a year later.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-upside">The upside<a href="https://heydru.com/insights/migrating-drupal-7-to-11-without-downtime#the-upside" class="hash-link" aria-label="Direct link to The upside" title="Direct link to The upside" translate="no">​</a></h2>
<p>D11 is meaningfully better than D7 in ways that aren't obvious until you're in it. Configuration management makes deployments predictable. The service container makes testing realistic. JSON:API ships in core, and GraphQL is available as a mature contrib module. The performance baseline is higher.</p>
<p>The migration is real work. But the platform you end up with is substantially more maintainable, more secure, and more capable than what you're leaving behind.</p>
<p>If you're still on D7 and trying to figure out where to start, the answer is the audit. Everything else follows from there.</p>]]></content:encoded>
            <category>Migrations</category>
        </item>
    </channel>
</rss>