Narratives in the AI stack do not start at the top. They start at the bottom and travel up.

The lag is 60 to 90 days. That is where the intelligence value lives. This report treats the AI infrastructure stack not as a single topic but as a system of seven interconnected narrative environments, each with its own dominant framing, its own key voices, and its own propagation dynamics. Energy and grid constraints at the base. Chip supply and architecture one floor up. Data center capacity, hyperscaler strategy, and the training-to-inference shift in the middle. Enterprise and consumer demand at the top.

Analysis by Shadow Research Team · April 2026 · Edition 1

Stack layers: 7 · Discrete narrative environments
Propagation lag: 60–90d · Observed, infrastructure to application
Accelerating layers: 3 · Energy, chips, training-to-inference
In-transit signals: 2 · Active propagation events right now

Five findings

What the stack is telling us right now.

The intelligence value lives at the bottom.

The application layer is in a receiving state.

  1. Infrastructure leads application by 60 to 90 days.

     The Q3 2024 energy constraint narrative produced the Q1 2025 capex discipline framing.

     Application teams are receiving delayed signals from below, not reacting in real time.

  2. Custom silicon is the in-transit signal that matters.

     Meta, Amazon, and Google have all made public moves away from NVIDIA dependency in the past 60 days.

     This reshapes enterprise procurement and inference cost framing by Q3 2026.

  3. Google's split chips confirm the architectural bifurcation.

     The stack has bifurcated into two distinct optimization regimes.

     You have 60 to 90 days before this becomes the frame product evaluation runs through.

  4. The enterprise ROI narrative is 18-month-old infrastructure economics.

     Reliable, cheaper inference made measurable enterprise outcomes possible.

     Enterprise comms teams never tracked the story that produced their own.

  5. Two narratives are in active transit as of April 2026.

     Geographic data center diversification arrives at the hyperscaler layer in 45 to 60 days.

     The training-to-inference shift produces an inference-first enterprise narrative in 60 to 75 days.

Why it matters

Comms teams across AI keep optimizing for the wrong narrative environment. They treat “AI” as a single topic. It is a system of seven layers, each with its own press cycle, voices, and framing conventions. The frame your reporters will use next quarter is already visible two layers below you as of April 2026.

Seven layers. Seven narrative environments.

Each layer operates on its own media cycle, with its own dominant voices and framing conventions. Narratives do not originate simultaneously across layers. They emerge from constraints in lower layers and propagate upward. The map below orders them top to bottom: where they sit in the stack, what state each is in, and how much narrative momentum each is generating right now.

The AI Infrastructure Stack · Narrative Map

Seven layers. Seven narrative environments.
Each receives from the one below it.

Layer 7 · Top · Consumer Demand · Developing · Velocity 55

AI product surface fragmenting into specialized verticals; no dominant platform narrative established.

↓ receives from below

Layer 6 · Top · Enterprise Demand · Active · Velocity 63

ROI-first framing arrives; agentic AI for workforce becomes the primary deployment narrative.

↓ receives from below

Layer 5 · Inflection · Training-to-Inference Shift · Accelerating · Velocity 82

Architectural bifurcation confirmed; edge inference emerging as the next buildout wave.

↓ receives from below

Layer 4 · Middle · Hyperscaler Strategy & Capex · Active · Velocity 68

Capex spend race collides with investor ROI demands; geographic diversification in progress.

↓ receives from below

Layer 3 · Middle · Data Center Capacity · Active · Velocity 74

Geographic expansion and resource scrutiny (power, water) reshaping where and how capacity is built.

↓ receives from below

Layer 2 · Foundation · Chip Supply & Architecture · Accelerating · Velocity 87

Custom silicon fragmenting GPU monopoly; training vs. inference chip architectures bifurcating.

↓ receives from below

Layer 1 · Foundation · Energy & Power Grid · Accelerating · Velocity 91

AI power demand testing national grid capacity; nuclear and natural gas displacing renewables-first narrative.

Layers ordered top to bottom (highest to lowest in the stack) · Status as of April 2026 · Velocity scores relative to 90-day baseline.

Three of the seven are accelerating, and all three sit in the infrastructure half of the stack. The application layers are in a receiving state, absorbing narratives that were already well-established in infrastructure coverage 60 to 90 days ago.

So what: the unit of analysis is the layer, not the topic.

Infrastructure layers are generating the most momentum.

Three of the seven layers are accelerating right now, and all three sit in the infrastructure half of the stack: Energy & Power Grid (Layer 1), Chip Supply & Architecture (Layer 2), and the Training-to-Inference Shift (Layer 5). The application layers — Enterprise and Consumer — are the slowest movers in the set. We measure that with a velocity score, defined below.

Velocity score · how it is computed

Three signals, one composite, indexed to a 90-day baseline.

Each layer's velocity score is a weighted blend of three independent measurements taken from the past 60 days of earned coverage, then indexed against the same layer's trailing 90-day baseline. The result is a 0–100 index, clamped at both extremes.

Volume growth · 40%
Quarter-over-quarter change in narrative-tagged article count for the layer, normalized against the layer's own 90-day mean. Captures how fast coverage is intensifying.
Source-tier concentration · 30%
Share of layer coverage appearing in Tier 1 outlets (WSJ, FT, Bloomberg, NYT, The Information, Reuters, major trade press). Captures how seriously the editorial center is treating the layer.
Thematic concentration · 30%
Share of layer coverage captured by the single dominant narrative within the layer (the top narrative's share of layer-tagged articles). Captures how unified the framing is.
Index formula
score = 100 × (0.4 · volume_z + 0.3 · tier1_share + 0.3 · top_narrative_share), clamped to 0–100. Z-scores are computed against the layer's 90-day trailing distribution.
Why it's relative, not absolute
Layers have different natural coverage volumes — Layer 4 (Hyperscaler) sees more raw articles than Layer 2 (Chips), but that doesn't make its narrative more active. Indexing to each layer's own baseline makes scores comparable across the stack.

Reading a score

Accelerating · 75+

A dominant frame is consolidating fast in Tier 1 coverage.

Active · 55–75

Coverage is sustained, framing is contested or in flux.

Developing · <55

Coverage exists, but no narrative owns the layer yet.

By that measure: Energy (91), Chips (87), and the Training-to-Inference shift (82) lead. Hyperscaler (68), Data Center (74), and Enterprise (63) sit in the active band. Consumer (55) is the only layer in the developing band — high coverage, but no consolidating story.
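The band cutoffs reduce to a small classifier. One ambiguity worth flagging: the report lists Active as 55–75 but places Consumer (55) in the developing band, so this sketch treats the lower Active edge as exclusive; the function name is illustrative.

```python
def band(score: float) -> str:
    """Map a velocity score to its reading band.

    75+ is Accelerating and <55 is Developing per the report. The Active
    band is treated as exclusive at 55 so that Consumer (55) lands in
    Developing, matching the report's own labels.
    """
    if score >= 75:
        return "Accelerating"
    if score > 55:
        return "Active"
    return "Developing"

# The seven April 2026 scores from the report:
scores = {"L1 Energy": 91, "L2 Chips": 87, "L3 Data Center": 74,
          "L4 Hyperscaler": 68, "L5 Training-to-Inference": 82,
          "L6 Enterprise": 63, "L7 Consumer": 55}
for layer, s in scores.items():
    print(layer, band(s))
```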

Narrative velocity score by stack layer · 90-day trailing baseline

Infrastructure layers are generating the most narrative momentum.

L7 · Consumer Demand · Developing · 55
L6 · Enterprise Demand · Active · 63
L5 · Training-to-Inference Shift · Accelerating · 82
L4 · Hyperscaler Strategy & Capex · Active · 68
L3 · Data Center Capacity · Active · 74
L2 · Chip Supply & Architecture · Accelerating · 87
L1 · Energy & Power Grid · Accelerating · 91

Source: Perigon News Intelligence · Shadow narrative velocity analysis · as of April 30, 2026.

By the time a narrative reaches the application layer, it has been running in infrastructure media for 60 to 90 days. The velocity ranking is the propagation thesis in static form: momentum originates below and dissipates upward.

So what: if you operate above Layer 4, your incoming narratives are already visible in coverage you are probably not reading.

The cycles rise and fall layer by layer.

Velocity over the past seven quarters, plotted layer by layer. Energy peaked early, dipped, and is re-accelerating. The chip cycle inflected sharply in the past two quarters. Training-to-inference broke open after DeepSeek in Q1 2025. Enterprise and consumer demand have only just begun to climb.

Velocity score by layer · Q3 2024 – Q1 2026 · 7-quarter trajectory

The cycle peaks arrive at lower layers first, then migrate up.

[Chart: velocity score per layer, Q3 '24 through Q1 '26, shared 0–100 scale, with each layer's cycle peak marked. Q1 '26 values: L7 · Consumer Demand 55 (Developing) · L6 · Enterprise Demand 63 (Active) · L5 · Training-to-Inference Shift 82 (Accelerating) · L4 · Hyperscaler Strategy & Capex 68 (Active) · L3 · Data Center Capacity 74 (Active) · L2 · Chip Supply & Architecture 87 (Accelerating) · L1 · Energy & Power Grid 91 (Accelerating).]

Source: Perigon News Intelligence · Shadow narrative velocity analysis. Each row uses a shared 0–100 scale.

Read the chart bottom-up: the cycle apex marker (the larger dot) arrives at lower layers first and migrates upward. That is the propagation thesis as a moving picture. Each peak you see at the bottom is roughly the next peak coming at the top, 60 to 90 days out.

High-volume layers and high-velocity layers are not the same.

Velocity is one half of the picture. Volume is the other. The panels below plot quarterly article volume per layer, ordered by Q1 2026 size. Layers can end the window at the same headline number with completely different trajectories underneath.

Quarterly article volume by layer · Q3 2024 – Q1 2026 · thousands of articles

Volume tells a different story than velocity.

One panel per layer, ordered by Q1 2026 volume. Each panel uses an independent y-scale so the shape reads on its own — the K value and growth multiple show absolute size.

L7 · Consumer Demand · 132K · +12% in 7 qtrs
L2 · Chip Supply & Architecture · 132K · 2.5× in 7 qtrs
L4 · Hyperscaler Strategy & Capex · 122K · +49% in 7 qtrs
L6 · Enterprise Demand · 102K · 2.4× in 7 qtrs
L3 · Data Center Capacity · 96K · 3.7× in 7 qtrs
L1 · Energy & Power Grid · 96K · +33% in 7 qtrs
L5 · Training-to-Inference Shift · 94K · 5.9× in 7 qtrs

Source: Perigon News Intelligence · Shadow analysis. Total across the stack, Q1 '26: 774K articles.

Consumer Demand (L7) and Chip Supply (L2) both end Q1 2026 at 132K articles. They got there in completely different ways. Consumer crawled — 118K to 132K, +12% across seven quarters, with a dip in the middle. Chips climbed steeply, 52K to 132K, a 2.5× run with most of the gain in the past three quarters. Endpoint says they are the same size; trajectory says they are opposite stories. The fastest absolute climbers are Training-to-Inference (5.9× from a small base), Data Center Capacity (3.7×), and Chip Supply (2.5×) — the same three layers leading on velocity.
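The panel labels mix two formats: a signed percent for slow movers and a multiple for fast ones. Under the assumption that the cutover sits at 2x (inferred from the labels, not stated in the report), the formatting can be sketched as follows; the function name is illustrative.

```python
def growth_label(start_k: float, end_k: float) -> str:
    """Format a 7-quarter volume change the way the panels do.

    Assumption: a multiple ("2.5x") when the endpoint ratio is at least 2,
    otherwise a signed percent ("+12%"). Inputs are volumes in thousands.
    """
    ratio = end_k / start_k
    if ratio >= 2:
        return f"{ratio:.1f}×"
    return f"{ratio - 1:+.0%}"

print(growth_label(118, 132))  # +12%  (Consumer: 118K -> 132K)
print(growth_label(52, 132))   # 2.5×  (Chips: 52K -> 132K)
```

Run against the two endpoints cited in the paragraph above, it reproduces the Consumer and Chips labels exactly.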

So what: a layer's endpoint hides what it took to get there. Trajectory is the volume signal worth tracking.

The dominant narrative in each layer, and what it sends upward.

For each layer: the current dominant narrative, its key voices, the active signals driving it, and the propagation destination and estimated lag. Read bottom-up — that is the direction the narratives are traveling.

Layer 1 · Foundation · Accelerating

Energy & Power Grid

AI power demand is testing grid capacity at a national scale, and the renewables-first narrative is giving way to nuclear and natural gas as the near-term solution frame.

Velocity 91

Active signals

  • AI data center power demand compared to South Dakota's entire grid output
  • Meta's natural gas procurement signals industry shift from net-zero commitments
  • SMR companies (NuScale) gaining Tier 1 coverage as near-term data center power solutions
  • Grid policy and transmission permitting reform entering the narrative for the first time

Key voices

  • Utility executives
  • FERC commissioners
  • DOE officials
  • WSJ energy desk
  • Bloomberg Green
  • Energy reporters
Propagates to Layer 3 · Data Center Capacity · Estimated lag: 60 days

Layer 2 · Foundation · Accelerating

Chip Supply & Architecture

Custom silicon is fragmenting the GPU monopoly. Meta, Amazon, and Google have all made public moves in the past 60 days. The training-versus-inference architectural split is now confirmed at the hardware layer.

Velocity 87

Active signals

  • Meta announces 4 new AI chips in a direct competitive signal to NVIDIA and AMD
  • Amazon CEO signals company could sell AI chips externally, raising new competitive stakes
  • Google launches distinct chips for training and inference: architectural bifurcation confirmed at hardware level
  • Export controls creating bifurcated China/West supply chain narrative; Huawei emerging as alternate provider

Key voices

  • Dylan Patel / SemiAnalysis
  • Semiconductor analysts (Bernstein, Barclays)
  • CNBC Tech
  • Bloomberg Technology
  • Earnings call coverage
Propagates to Layer 5 · Training-to-Inference Shift · Estimated lag: 75 days

Layer 3 · Middle · Active

Data Center Capacity

Geographic diversification and resource scrutiny are reshaping where AI capacity gets built. Inland states and international markets are competing for hyperscaler investment on energy and incentives, not proximity to talent.

Velocity 74

Active signals

  • Wyoming actively recruiting Google, Microsoft, and Meta with energy-availability incentives
  • Amazon commits nearly $40B for data center expansion in Spain: largest single international pledge
  • Water usage emerging as a second-order scrutiny narrative alongside power consumption
  • Adani eyes partnerships with Meta and Google, signaling emerging market buildout acceleration

Key voices

  • Real estate and infrastructure press
  • State economic development officials
  • Colocation executives
  • Bloomberg infrastructure desk
  • Reuters
Propagates to Layer 4 · Hyperscaler Strategy & Capex · Estimated lag: 45 days

Layer 4 · Middle · Active

Hyperscaler Strategy & Capex

The capex spend race is now in tension with investor demands for return. Environmental and resource scrutiny from Layers 1 and 3 is showing up in earnings call questioning for the first time.

Velocity 68

Active signals

  • Microsoft described as “speeding up” in Big Tech's data center spend race: competitive acceleration framing
  • Investors pressing Google, Amazon, and Microsoft on water and energy use: Layer 1 signal arriving at Layer 4
  • Emerging market expansion (Spain $40B, India) read as cost-diversification signal, not pure growth
  • Return-on-capex questioning entering mainstream analyst coverage; no hyperscaler has answered it directly

Key voices

  • Wall Street analysts (Morgan Stanley, JPMorgan)
  • Bloomberg Intelligence
  • CNBC Squawk Box
  • FT Lex
  • Earnings transcript coverage
Propagates to Layer 5 · Training-to-Inference Shift · Estimated lag: 60 days

Layer 5 · Inflection · Accelerating

Training-to-Inference Shift

The industry has moved from building models to running them, and the infrastructure requirements are fundamentally different. Google's simultaneous launch of separate training and inference chips is the structural confirmation event for this layer.

Velocity 82

Active signals

  • Google launches separate AI chips for training and inference: two optimization regimes now exist at the hardware level
  • Edge inference platforms (Cloudflare AI edge, TuringEra SoC) growing narrative presence in enterprise and developer press
  • China AI deployment efficiency advantage entering Western coverage as a competitive frame
  • Efficiency-over-scale framing now competing with capability-over-cost in model coverage; DeepSeek effect persisting

Key voices

  • Chip Huyen / ML engineering Substack
  • Simon Willison
  • Semiconductor analysts
  • AWS / Azure / Cloudflare technical blogs
  • The Information
Propagates to Layer 6 · Enterprise Demand · Estimated lag: 75 days

Layer 6 · Top · Active

Enterprise Demand

The ROI narrative has arrived. Enterprises are now expected to justify AI spend in measurable business outcomes. This frame was made possible by inference economics that were being determined at Layers 2 and 5 twelve to eighteen months ago.

Velocity 63

Active signals

  • Snowflake AI report directly links enterprise ROI language to long-term demand forecasting
  • “The Enterprise AI ROI Era Has Arrived”: declarative framing appearing across Tier 1 business press in Q2 2026
  • Agentic AI for workforce productivity becoming the primary enterprise deployment narrative
  • Data quality and governance surfacing as the primary adoption blocker: a new friction narrative forming

Key voices

  • McKinsey / Deloitte / BCG research desks
  • Salesforce, ServiceNow earnings
  • CIO/CDO interview coverage
  • HBR / MIT Sloan
  • Fortune
Propagates to Layer 7 · Consumer Demand · Estimated lag: 90 days

Layer 7 · Top · Developing

Consumer Demand

The consumer AI narrative is in transition. ChatGPT saturation is real. The next wave of consumer AI products is being shaped by inference economics still being determined below.

Velocity 55

Active signals

  • AI assistant market fragmenting: voice, vision, code, and creative each developing distinct communities and press beats
  • Inference cost reduction enabling new categories of AI-native consumer applications at lower price points
  • On-device and privacy-first AI narrative gaining traction as a consumer trust concern
  • API pricing volatility creating developer market anxiety: a friction narrative the press is beginning to surface

Key voices

  • The Verge
  • Wired
  • Benedict Evans
  • Stratechery
  • App developer community
  • Consumer tech beat reporters

So what: most of these signals will become next quarter's framing one or two layers above their origin. The question is not whether they propagate. It is when.

Narrative propagation is measurable, not theoretical.

The 60-to-90-day lag between infrastructure narratives and application-layer framing is traceable in media data. The four propagation events below were observed over the past 18 months: three already complete, one in active transit.

Cross-layer narrative propagation · last 18 months

How narratives travel up the stack: four traceable examples.

  1. Energy → Hyperscaler Strategy

    Grid constraint produces capex discipline

    In 2024, energy journalists at WSJ, Bloomberg, and FT began reporting on data center power demand as a constraint on buildout. By Q1 2025, the same constraint had become the dominant frame in hyperscaler capex coverage: not “how much are they spending” but “can they actually build what they are committing to, given grid access.” The narrative did not originate in investor coverage. It arrived there from energy coverage, 60 to 75 days later.

     L1 → L4 · Observed lag: 65 days
  2. Chips → Training-to-Inference

    Custom silicon produces architectural bifurcation

    Apple's Neural Engine and Google's TPU had been a technical narrative since 2022. In late 2024, as Meta, Amazon, and Broadcom began making larger custom silicon announcements, chip coverage began explicitly framing the training-versus-inference distinction for the first time. By Q1 2025, inference optimization was the dominant frame in model deployment coverage. Google's April 2026 announcement of separate training and inference chips is the confirmation event: the narrative is now architectural doctrine.

     L2 → L5 · Observed lag: 80 days
  3. Training-to-Inference → Enterprise

    Inference efficiency produces enterprise ROI expectation

    The DeepSeek efficiency disclosure in January 2025 made inference cost reduction a mainstream narrative at Layer 5. Enterprise journalists initially covered it as a China competitiveness story. By Q2 2025, the same efficiency data had been absorbed into enterprise coverage as a cost-basis expectation: if inference is cheap enough to be economically viable at scale, companies should be showing ROI. The “ROI Era Has Arrived” language appearing in Q2 2026 coverage is the delayed output of a narrative that ran at Layer 5 more than 12 months earlier.

     L5 → L6 · Observed lag: 75 to 90 days per cycle
  4. Data Center → Hyperscaler Strategy

    Geographic diversification produces supply chain sovereignty narrative

    Data center geographic diversification (Wyoming, Spain, India) is currently running at Layer 3 as an infrastructure narrative. Within 45 to 60 days, it will arrive at the hyperscaler strategy layer as a narrative about supply chain sovereignty and geopolitical risk management. Companies that sit at Layer 4 or above and have not yet framed their infrastructure footprint in sovereignty terms are currently behind the narrative cycle.

     L3 → L4 · In transit · Estimated arrival: Q3 2026 · Estimated lag: 45 to 60 days

Source: Shadow propagation analysis · Media data: Perigon News Intelligence · Timeline: Q3 2024 to April 2026.

Two narratives are in active transit as of April 2026. If you operate at the enterprise demand layer (Layer 6) and have not yet built inference-efficiency proof points into your positioning, you are 60 to 75 days from the moment journalists will expect them. If you operate at the hyperscaler strategy layer (Layer 4) and have not yet framed your buildout in sovereignty terms, the window is 45 to 60 days. Both windows are still open.

So what: there is a finite window between when a frame is visible below you and when it is expected of you.

What this means depending on where you sit.

The intelligence value of this report differs depending on where in the stack you operate. Three audiences, three different reads.

Layers 1–3 · Infrastructure comms

You are at the origin of the propagation cycle.

The narratives you manage now will frame your customers' businesses in 60 to 90 days. Most of your comms work is received at the top of the stack by audiences who do not understand infrastructure. Build translation assets now — the frame you are living in needs a version a CIO can receive. You have the window to define it before it arrives distorted.

Layers 5–7 · Application comms

You are in a receiving state.

The narratives arriving at your layer were visible in infrastructure media 60 to 90 days ago. Reporters covering your space have already been primed by coverage you probably did not read. Inference efficiency, the ROI demand, the sovereignty framing: all of it originated below you. Read infrastructure press weekly for narrative intelligence — the frame you will respond to in Q3 2026 is visible at Layers 1 and 2 in the April 2026 cut.

Agencies & consultants · Any AI stack client

The unit of analysis your clients are using is wrong.

“AI” as a single topic flattens a seven-layer system. Your clients sit on one floor. Their narratives do not originate with them, and their comms strategy should not be built as if they do. Map every client to their layer, then audit what is active two layers below them — that is their incoming narrative environment for Q3 2026.

Methodology

How we did this.

Researched and authored by Shadow.

Sources
Perigon News Intelligence API. Earned media only (non-news and paid-news labels excluded). English language. Reprints deduplicated.
Date range
60-day period ending April 30, 2026, with 18-month historical comparison for propagation analysis.
Sample
Narrative-layer queries covering energy and grid, chip supply and architecture, data center capacity, hyperscaler strategy, training-to-inference dynamics, and enterprise and consumer demand. Total Perigon universe across all queries: ~5.2M articles.
Velocity scores
Shadow composite of coverage volume growth, source-tier concentration (Tier 1 penetration), and thematic concentration (share of layer coverage captured by the dominant narrative). Indexed to a 90-day trailing baseline. Scores are relative within the stack.
Propagation lag estimates
Derived from cross-layer narrative matching in media data, tracking when a frame first appeared at a lower layer and when it became dominant at the receiving layer. Estimates represent observed medians across three or more traceable propagation events per pathway, not point estimates.
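The median-lag method described above can be sketched in a few lines. The function name and the event dates are illustrative, not the report's underlying data; only the pairing logic (first appearance below, dominance above) and the 3+ event minimum come from the methodology.

```python
from datetime import date
from statistics import median

def observed_lag(events: list[tuple[date, date]]) -> float:
    """Median propagation lag in days across traceable events for one pathway.

    Each event is (date the frame first appeared at the lower layer,
    date it became dominant at the receiving layer). Per the methodology,
    a pathway needs three or more traceable events.
    """
    if len(events) < 3:
        raise ValueError("need 3+ traceable events per pathway")
    return median((dominant - first_seen).days for first_seen, dominant in events)

# Illustrative events only:
events = [
    (date(2024, 9, 1), date(2024, 11, 5)),     # 65-day lag
    (date(2024, 10, 10), date(2024, 12, 24)),  # 75-day lag
    (date(2025, 1, 15), date(2025, 3, 16)),    # 60-day lag
]
print(observed_lag(events))  # 65
```

Using the median rather than the mean keeps one slow or fast outlier event from dragging the pathway estimate.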

Want this analysis for your category?

Shadow runs Narrative Cycle Intelligence across any market. Book a demo to see what the data looks like for your clients' space.