Why it matters
Comms teams across AI keep optimizing for the wrong narrative environment. They treat "AI" as a single topic. It is a system of seven layers, each with its own press cycle, voices, and framing conventions. As of April 2026, the frame your reporters will use next quarter is already visible two layers below you.
Seven layers. Seven narrative environments.
Each layer operates on its own media cycle, with its own dominant voices and framing conventions. Narratives do not originate simultaneously across layers. They emerge from constraints in lower layers and propagate upward. The map below orders them top to bottom: where they sit in the stack, what state each is in, and how much narrative momentum each is generating right now.
The AI Infrastructure Stack · Narrative Map
Each receives from the one below it.
Consumer Demand
Developing · AI product surface fragmenting into specialized verticals; no dominant platform narrative established.
receives from below ↓
Enterprise Demand
Active · ROI-first framing arrives; agentic AI for workforce becomes the primary deployment narrative.
receives from below ↓
Training-to-Inference Shift
Accelerating · Architectural bifurcation confirmed; edge inference emerging as the next buildout wave.
receives from below ↓
Hyperscaler Strategy & Capex
Active · Capex spend race collides with investor ROI demands; geographic diversification in progress.
receives from below ↓
Data Center Capacity
Active · Geographic expansion and resource scrutiny (power, water) reshaping where and how capacity is built.
receives from below ↓
Chip Supply & Architecture
Accelerating · Custom silicon fragmenting GPU monopoly; training vs. inference chip architectures bifurcating.
receives from below ↓
Energy & Power Grid
Accelerating · AI power demand testing national grid capacity; nuclear and natural gas displacing renewables-first narrative.
Layers ordered top to bottom (highest to lowest in the stack) · Status as of April 2026 · Velocity scores relative to 90-day baseline.
Three of the seven are accelerating, and all three sit in the infrastructure half of the stack. The application layers are in a receiving state, absorbing narratives that were already well-established in infrastructure coverage 60 to 90 days ago.
So what: the unit of analysis is the layer, not the topic.
Infrastructure layers are generating the most momentum.
Three of the seven layers are accelerating right now, and all three sit in the infrastructure half of the stack: Energy & Power Grid (Layer 1), Chip Supply & Architecture (Layer 2), and the Training-to-Inference Shift (Layer 5). The application layers — Enterprise and Consumer — are the slowest movers in the set. We measure that with a velocity score, defined below.
Velocity score · how it is computed
Three signals, one composite, indexed to a 90-day baseline.
Each layer's velocity score is a weighted blend of three independent measurements taken from the past 60 days of earned coverage, then indexed against the same layer's trailing 90-day baseline. The result is a 0–100 number where 100 is the highest level the layer has reached in that window.
- Volume growth · 40% · Quarter-over-quarter change in narrative-tagged article count for the layer, normalized against the layer's own 90-day mean. Captures how fast coverage is intensifying.
- Source-tier concentration · 30% · Share of layer coverage appearing in Tier 1 outlets (WSJ, FT, Bloomberg, NYT, The Information, Reuters, major trade press). Captures how seriously the editorial center is treating the layer.
- Thematic concentration · 30% · Share of layer coverage captured by the single dominant narrative within the layer (the top narrative's share of layer-tagged articles). Captures how unified the framing is.
Reading a score
- Accelerating · A dominant frame is consolidating fast in Tier 1 coverage.
- Active · Coverage is sustained, framing is contested or in flux.
- Developing · Coverage exists, but no narrative owns the layer yet.
By that measure: Energy (91), Chips (87), and the Training-to-Inference shift (82) lead. Data Center (74), Hyperscaler (68), and Enterprise (63) sit in the active band. Consumer (55) is the only layer in the developing band — high coverage, but no consolidating story.
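The composite described above can be sketched in a few lines. This is a minimal illustration, not Perigon's implementation: the 40/30/30 weights come from this section, while the function names, the 0-to-1 signal scaling, and the 80/60 band cutoffs (implied by the scores quoted in this section) are assumptions.

```python
# Illustrative sketch of the velocity composite. Weights follow the stated
# 40/30/30 split; all names and the band cutoffs are assumptions.

WEIGHTS = {"volume_growth": 0.40, "tier1_share": 0.30, "theme_share": 0.30}

def composite(volume_growth: float, tier1_share: float, theme_share: float) -> float:
    """Weighted blend of the three signals, each expressed on a 0-1 scale."""
    return (WEIGHTS["volume_growth"] * volume_growth
            + WEIGHTS["tier1_share"] * tier1_share
            + WEIGHTS["theme_share"] * theme_share)

def velocity_score(current: float, window_composites: list[float]) -> float:
    """Index the current composite against the layer's trailing-window peak,
    yielding a 0-100 score where 100 is the window high."""
    peak = max(window_composites + [current])
    return round(100 * current / peak, 0) if peak > 0 else 0.0

def band(score: float) -> str:
    """Score bands implied by the figures in this section (assumed cutoffs)."""
    if score >= 80:
        return "accelerating"   # dominant frame consolidating fast
    if score >= 60:
        return "active"         # sustained coverage, contested framing
    return "developing"         # coverage without a consolidating story
```

Under these assumed cutoffs, the seven scores quoted above fall into the same three bands the report names: three layers accelerating, three active, one developing.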
Narrative velocity score by stack layer · 90-day trailing baseline
Infrastructure layers are generating the most narrative momentum.
Source: Perigon News Intelligence · Shadow narrative velocity analysis · as of April 30, 2026.
By the time a narrative reaches the application layer, it has been running in infrastructure media for 60 to 90 days. The velocity ranking is the propagation thesis in static form: momentum originates below and dissipates upward.
So what: if you operate above Layer 4, your incoming narratives are already visible in coverage you are probably not reading.
The cycles rise and fall layer by layer.
Velocity over the past seven quarters, plotted layer by layer. Energy peaked early, dipped, and is re-accelerating. The chip cycle inflected sharply in the past two quarters. Training-to-inference broke open after DeepSeek in Q1 2025. Enterprise and consumer demand have only just begun to climb.
Velocity score by layer · Q3 2024 – Q1 2026 · 7-quarter trajectory
The cycle peaks arrive at lower layers first, then migrate up.
Source: Perigon News Intelligence · Shadow narrative velocity analysis. Each row uses a shared 0–100 scale.
Read the chart bottom-up: the cycle apex marker (the larger dot) arrives at lower layers first and migrates upward. That is the propagation thesis as a moving picture. Each peak you see at the bottom is roughly the next peak coming at the top, 60 to 90 days out.
High-volume layers and high-velocity layers are not the same.
Velocity is one half of the picture. Volume is the other. The panels below plot quarterly article volume per layer, ordered by Q1 2026 size. Layers can end the window at the same headline number with completely different trajectories underneath.
Quarterly article volume by layer · Q3 2024 – Q1 2026 · thousands of articles
Volume tells a different story than velocity.
One panel per layer, ordered by Q1 2026 volume. Each panel uses an independent y-scale so the shape reads on its own — the K value and growth multiple show absolute size.
Source: Perigon News Intelligence · Shadow analysis. Total across the stack, Q1 '26: 774K articles.
Consumer Demand (L7) and Chip Supply (L2) both end Q1 2026 at 132K articles. They got there in completely different ways. Consumer crawled — 118K to 132K, +12% across seven quarters, with a dip in the middle. Chips climbed steeply, 52K to 132K, a 2.5× run with most of the gain in the past three quarters. Endpoint says they are the same size; trajectory says they are opposite stories. The fastest absolute climbers are Training-to-Inference (5.9× from a small base), Data Center Capacity (3.7×), and Chip Supply (2.5×) — the same three layers leading on velocity.
So what: a layer's endpoint hides what it took to get there. Trajectory is the volume signal worth tracking.
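As a worked check on the endpoint-versus-trajectory point, the multiples and percent changes quoted above can be recomputed from the stated start and end volumes. The helper below is illustrative; it uses only the endpoint figures given in this section, with the intermediate quarters omitted.

```python
def trajectory(series_k: list[float]) -> dict:
    """Summarize a quarterly volume series (thousands of articles):
    endpoint, growth multiple, and percent change across the window."""
    start, end = series_k[0], series_k[-1]
    return {
        "endpoint_k": end,
        "multiple": round(end / start, 1),
        "pct_change": round(100 * (end - start) / start),
    }

# Endpoint figures stated in this section; intermediate quarters omitted.
consumer = trajectory([118, 132])   # crawled: +12% across seven quarters
chips = trajectory([52, 132])       # climbed: a 2.5x run
```

Both series end at the same 132K, but the multiple (1.1x versus 2.5x) separates the two stories the endpoint hides.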
The dominant narrative in each layer, and what it sends upward.
For each layer: the current dominant narrative, its key voices, the active signals driving it, and the propagation destination and estimated lag. Read bottom-up — that is the direction the narratives are traveling.
Layer 1 · Foundation · Accelerating
Energy & Power Grid
AI power demand is testing grid capacity at a national scale, and the renewables-first narrative is giving way to nuclear and natural gas as the near-term solution frame.
Active signals
- AI data center power demand compared to South Dakota's entire grid output
- Meta's natural gas procurement signals industry shift from net-zero commitments
- SMR companies (NuScale) gaining Tier 1 coverage as near-term data center power solutions
- Grid policy and transmission permitting reform entering the narrative for the first time
Key voices
- Utility executives
- FERC commissioners
- DOE officials
- WSJ energy desk
- Bloomberg Green
- Energy reporters
Layer 2 · Foundation · Accelerating
Chip Supply & Architecture
Custom silicon is fragmenting the GPU monopoly. Meta, Amazon, and Google have all made public moves in the past 60 days. The training-versus-inference architectural split is now confirmed at the hardware layer.
Active signals
- Meta announces 4 new AI chips in a direct competitive signal to NVIDIA and AMD
- Amazon CEO signals company could sell AI chips externally, raising new competitive stakes
- Google launches distinct chips for training and inference: architectural bifurcation confirmed at hardware level
- Export controls creating bifurcated China/West supply chain narrative; Huawei emerging as alternate provider
Key voices
- Dylan Patel / SemiAnalysis
- Semiconductor analysts (Bernstein, Barclays)
- CNBC Tech
- Bloomberg Technology
- Earnings call coverage
Layer 3 · Middle · Active
Data Center Capacity
Geographic diversification and resource scrutiny are reshaping where AI capacity gets built. Inland states and international markets are competing for hyperscaler investment on energy and incentives, not proximity to talent.
Active signals
- Wyoming actively recruiting Google, Microsoft, and Meta with energy-availability incentives
- Amazon commits nearly $40B for data center expansion in Spain: largest single international pledge
- Water usage emerging as a second-order scrutiny narrative alongside power consumption
- Adani eyes partnerships with Meta and Google, signaling emerging market buildout acceleration
Key voices
- Real estate and infrastructure press
- State economic development officials
- Colocation executives
- Bloomberg infrastructure desk
- Reuters
Layer 4 · Middle · Active
Hyperscaler Strategy & Capex
The capex spend race is now in tension with investor demands for return. Environmental and resource scrutiny from Layers 1 and 3 is showing up in earnings call questioning for the first time.
Active signals
- Microsoft described as “speeding up” in Big Tech's data center spend race: competitive acceleration framing
- Investors pressing Google, Amazon, and Microsoft on water and energy use: Layer 1 signal arriving at Layer 4
- Emerging market expansion (Spain $40B, India) read as cost-diversification signal, not pure growth
- Return-on-capex questioning entering mainstream analyst coverage; no hyperscaler has answered it directly
Key voices
- Wall Street analysts (Morgan Stanley, JPMorgan)
- Bloomberg Intelligence
- CNBC Squawk Box
- FT Lex
- Earnings transcript coverage
Layer 5 · Inflection · Accelerating
Training-to-Inference Shift
The industry has moved from building models to running them, and the infrastructure requirements are fundamentally different. Google's simultaneous launch of separate training and inference chips is the structural confirmation event for this layer.
Active signals
- Google launches separate AI chips for training and inference: two optimization regimes now exist at the hardware level
- Edge inference platforms (Cloudflare AI edge, TuringEra SoC) growing narrative presence in enterprise and developer press
- China AI deployment efficiency advantage entering Western coverage as a competitive frame
- Efficiency-over-scale framing now competing with capability-over-cost in model coverage; DeepSeek effect persisting
Key voices
- Chip Huyen / ML engineering Substack
- Simon Willison
- Semiconductor analysts
- AWS / Azure / Cloudflare technical blogs
- The Information
Layer 6 · Top · Active
Enterprise Demand
The ROI narrative has arrived. Enterprises are now expected to justify AI spend in measurable business outcomes. This frame was made possible by inference economics that were being determined at Layers 2 and 5 twelve to eighteen months ago.
Active signals
- Snowflake AI report directly links enterprise ROI language to long-term demand forecasting
- “The Enterprise AI ROI Era Has Arrived”: declarative framing appearing across Tier 1 business press in Q2 2026
- Agentic AI for workforce productivity becoming the primary enterprise deployment narrative
- Data quality and governance surfacing as the primary adoption blocker: a new friction narrative forming
Key voices
- McKinsey / Deloitte / BCG research desks
- Salesforce, ServiceNow earnings
- CIO/CDO interview coverage
- HBR / MIT Sloan
- Fortune
Layer 7 · Top · Developing
Consumer Demand
The consumer AI narrative is in transition. ChatGPT saturation is real. The next wave of consumer AI products is being shaped by inference economics still being determined below.
Active signals
- AI assistant market fragmenting: voice, vision, code, and creative each developing distinct communities and press beats
- Inference cost reduction enabling new categories of AI-native consumer applications at lower price points
- On-device and privacy-first AI narrative gaining traction as a consumer trust concern
- API pricing volatility creating developer market anxiety: a friction narrative the press is beginning to surface
Key voices
- The Verge
- Wired
- Benedict Evans
- Stratechery
- App developer community
- Consumer tech beat reporters
So what: most of these signals will become next quarter's framing one or two layers above their origin. The question is not whether they propagate. It is when.
Narrative propagation is measurable, not theoretical.
The 60-to-90-day lag between infrastructure narratives and application-layer framing is traceable in media data. These four propagation events were observed over the past 18 months — three already complete, one in active transit.
Cross-layer narrative propagation · last 18 months
How narratives travel up the stack: four traceable examples.
Energy → Hyperscaler Strategy
Grid constraint produces capex discipline
In 2024, energy journalists at WSJ, Bloomberg, and FT began reporting on data center power demand as a constraint on buildout. By Q1 2025, the same constraint had become the dominant frame in hyperscaler capex coverage: not “how much are they spending” but “can they actually build what they are committing to, given grid access.” The narrative did not originate in investor coverage. It arrived there from energy coverage, 60 to 75 days later.
L1 → L4 · Observed lag: 65 days
Chips → Training-to-Inference
Custom silicon produces architectural bifurcation
Apple's Neural Engine and Google's TPU had been a technical narrative since 2022. In late 2024, as Meta, Amazon, and Broadcom began making larger custom silicon announcements, chip coverage began explicitly framing the training-versus-inference distinction for the first time. By Q1 2025, inference optimization was the dominant frame in model deployment coverage. Google's April 2026 announcement of separate training and inference chips is the confirmation event: the narrative is now architectural doctrine.
L2 → L5 · Observed lag: 80 days
Training-to-Inference → Enterprise
Inference efficiency produces enterprise ROI expectation
The DeepSeek efficiency disclosure in January 2025 made inference cost reduction a mainstream narrative at Layer 5. Enterprise journalists initially covered it as a China competitiveness story. By Q2 2025, the same efficiency data had been absorbed into enterprise coverage as a cost-basis expectation: if inference is cheap enough to be economically viable at scale, companies should be showing ROI. The “ROI Era Has Arrived” language appearing in Q2 2026 coverage is the delayed output of a narrative that ran at Layer 5 more than 12 months earlier.
L5 → L6 · Observed lag: 75 to 90 days per cycle
Data Center → Hyperscaler Strategy
Geographic diversification produces supply chain sovereignty narrative
Data center geographic diversification (Wyoming, Spain, India) is currently running at Layer 3 as an infrastructure narrative. Within 45 to 60 days, it will arrive at the hyperscaler strategy layer as a narrative about supply chain sovereignty and geopolitical risk management. Companies that sit at Layer 4 or above and have not yet framed their infrastructure footprint in sovereignty terms are currently behind the narrative cycle.
L3 → L4 · In transit · Estimated arrival: Q3 2026 · Estimated lag: 45 to 60 days
Source: Shadow propagation analysis · Media data: Perigon News Intelligence · Timeline: Q3 2024 to April 2026.
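One common way a lag like the 65 days above can be estimated, sketched here as a generic technique rather than the Shadow methodology, is to cross-correlate the two layers' coverage time series and take the offset at which the lower layer's series best predicts the upper layer's. All names in the sketch are assumptions.

```python
def estimated_lag(lower: list[float], upper: list[float], max_lag: int) -> int:
    """Return the shift (in periods, e.g. days) at which the lower layer's
    coverage series best aligns with the upper layer's, i.e. the lag."""
    def corr(a, b):
        # Pearson correlation of two equal-length series.
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a) ** 0.5
        vb = sum((y - mb) ** 2 for y in b) ** 0.5
        return cov / (va * vb) if va and vb else 0.0

    best_lag, best_r = 0, float("-inf")
    for lag in range(max_lag + 1):
        # Compare lower-layer coverage at t with upper-layer coverage at t+lag.
        r = corr(lower[: len(lower) - lag] if lag else lower, upper[lag:])
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag
```

On synthetic data where an upper layer simply repeats a lower layer's coverage spike five periods later, the function recovers a lag of five; real coverage series are noisier, which is why the report quotes lags as ranges.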
Two narratives are in active transit as of April 2026. If you operate at the enterprise demand layer (Layer 6) and have not yet built inference-efficiency proof points into your positioning, you are 60 to 75 days from the moment journalists will expect them. If you operate at the hyperscaler strategy layer (Layer 4) and have not yet framed your buildout in sovereignty terms, the window is 45 to 60 days. Both windows are still open.
So what: there is a finite window between when a frame is visible below you and when it is expected of you.
What this means depending on where you sit.
The intelligence value of this report differs depending on where in the stack you operate. Three audiences, three different reads.
Layers 1–3 · Infrastructure comms
You are at the origin of the propagation cycle.
The narratives you manage now will frame your customers' businesses in 60 to 90 days. Most of your comms work is received at the top of the stack by audiences who do not understand infrastructure. Build translation assets now — the frame you are living in needs a version a CIO can receive. You have the window to define it before it arrives distorted.
Layers 5–7 · Application comms
You are in a receiving state.
The narratives arriving at your layer were visible in infrastructure media 60 to 90 days ago. Reporters covering your space have already been primed by coverage you probably did not read. Inference efficiency, the ROI demand, the sovereignty framing: all of it originated below you. Read infrastructure press weekly for narrative intelligence — the frame you will respond to in Q3 2026 is visible at Layers 1 and 2 in the April 2026 cut.
Agencies & consultants · Any AI stack client
The unit of analysis your clients are using is wrong.
“AI” as a single topic flattens a seven-layer system. Your clients sit on one floor. Their narratives do not originate with them, and their comms strategy should not be built as if they do. Map every client to their layer, then audit what is active two layers below them — that is their incoming narrative environment for Q3 2026.