This is Part 2 of a two-part series on the state of AI. For the full data and analysis behind these shifts, start with Part 1: The 2025 Lookback.
AI-Driven Shifts for 2026
Major shifts for business, technology, and the web
The 2025 lookback covered the data behind last year's biggest AI shifts: ChatGPT scaling to 800 million weekly users, 95% of enterprise AI pilots failing to deliver ROI, reasoning models redefining what LLMs can do, and the growing tension between AI companies and the open web.
That tension was on full display at the India AI Summit this February. Leaders from across the tech industry gathered on stage for a show of unity, nearly all of them joining hands. The exceptions were hard to miss: Sam Altman and Dario Amodei, standing side by side but never joining in.
Video via Reuters. Sam Altman and Dario Amodei at the India AI Summit, February 2026.
That friction shows up in most of what follows. These shifts come from the patterns I'm watching across the industry and the work I do with this technology daily.
AI Platforms & Competition
The narrowing model performance gap, dramatic cost compression in inference pricing, and ChatGPT's growth to 800 million weekly active users all converged through 2025. Those dynamics, capability convergence meeting massive consumer scale, are reshaping where AI platforms compete and how they differentiate in 2026.
Agentic tools are expanding beyond coding
Today's most visible AI agents are coding-focused: OpenAI Codex, Claude Code, GitHub Copilot. But the pattern is already expanding. Gartner estimates that 40% of enterprise applications will include task-specific AI agents by end of 2026, up from less than 5% in 2025 (Gartner). As these tools mature, agents are handling end-to-end workflows across creative, analytical, and operational domains.
Creative Production
Purpose-built agents for advertising, entertainment, and brand content. Full campaign production from brief to finished assets.
Data & Analysis
Agents that query databases, build visualizations, and present findings. Analysis workflows that currently take days, completed in minutes.
General Productivity & Operations
Cross-functional agents handling scheduling, research, document preparation, and operational workflows that connect tasks across teams and systems.
Coding agents have been a successful proof of concept: give AI a well-defined environment with clear feedback loops (code compiles or it doesn't), and it can operate with real autonomy. That same pattern is already showing up in creative production, data analysis, and structured business operations. Expect these offerings to expand and refine through the year as feedback loops in each domain become better defined.
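The feedback-loop pattern can be sketched in a few lines. This is a toy illustration, not any vendor's agent implementation: `propose_patch` stands in for a model call (here a canned sequence so the loop runs without an API), and the compile check is the objective feedback signal.

```python
# Minimal sketch of the loop that makes coding agents work:
# propose, verify against an objective check, retry on failure.

def propose_patch(attempt: int) -> str:
    """Hypothetical model call: returns a candidate per attempt."""
    candidates = [
        "def add(a, b) return a + b",        # syntax error: missing colon
        "def add(a, b):\n    return a + b",  # compiles
    ]
    return candidates[min(attempt, len(candidates) - 1)]

def verify(source: str) -> bool:
    """Objective feedback signal: does the code compile?"""
    try:
        compile(source, "<candidate>", "exec")
        return True
    except SyntaxError:
        return False

def agent_loop(max_attempts: int = 3):
    for attempt in range(max_attempts):
        candidate = propose_patch(attempt)
        if verify(candidate):
            return candidate, attempt + 1
    return None

code, attempts = agent_loop()
print(f"accepted after {attempts} attempts")  # second candidate passes
```

The same shape generalizes to other domains once the `verify` step exists: a rendered asset passing brand checks, or a query result matching a validation set.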
Provider advantages are sharpening around data moats
Each provider's data moat reflects a fundamental positioning choice: who the product is primarily built for. Some platforms lean toward personal exploration and creative play, others toward professional productivity. That positioning shapes which data advantages each provider pursues and where their competitive edges are sharpest.
Provider focus
Here's how those data advantages and positioning choices play out for each of the five major providers.
OpenAI
Center of the spectrum, leaning personal. Consumer dominance via ChatGPT is its foundation, but expanding into enterprise creates tension with Microsoft, its largest investor.
Unique advantages
- ● 900M weekly active users generating massive interaction data
- ● Microsoft/Azure enterprise distribution
- ● Consumer familiarity creating a pipeline to enterprise adoption
- ● Largest third-party developer ecosystem (API, GPTs, plugins)
Google
Spans the full spectrum. Consumer products and Workspace anchor both ends, with every surface feeding data back into AI models.
Unique advantages
- ● YouTube, Maps, and Search: decades of video, spatial, and crawl data
- ● Custom TPU silicon (Trillium), reducing Nvidia GPU dependence
- ● 1.5B monthly AI Overview users in the ecosystem
- ● Android and Chrome: device-level integration across billions of users
Anthropic
Furthest toward work. Developer-first tooling and no consumer entertainment play. Benefits as a less conflicted enterprise partner as OpenAI competes with Microsoft.
Unique advantages
- ● AWS Bedrock distribution within existing cloud contracts
- ● Code-first positioning and developer ecosystem traction
- ● Research-driven credibility (Economic Index, safety publications)
- ● Governance focus attractive to government, education, and NGOs
xAI
Deep personal side. Real-time X integration, image generation, and social platform embedding make it the most consumer-entertainment-focused of the five.
Unique advantages
- ● Real-time X/Twitter data firehose and social graph
- ● Breaking news and live event speed advantage
- ● Grok Code models popular with indie developers via OpenRouter
- ● Musk ecosystem: Starlink (4M+ subscribers), Tesla fleet data
Meta
Furthest toward personal. AI embedded across 3.6B users on Facebook, Instagram, WhatsApp, and Threads, with Llama open weights as a long-term bet on ecosystem influence.
Unique advantages
- ● 3.6B+ users generating behavioral and engagement data
- ● Open-weight Llama models influencing the AI market
- ● AI integration across messaging, social, and commerce at global scale
- ● WhatsApp reach in emerging markets as an AI distribution channel
Not every major player is leading with a frontier model. Some are betting on distribution, infrastructure, and ecosystem integration instead.
Platform giants without frontier models
Three of the largest technology companies have massive distribution, cloud infrastructure, and custom AI silicon but have not yet produced a competitive frontier model. If any of them did, the combination of existing reach and infrastructure could reshape the competitive picture quickly.
Amazon
AWS cloud dominance, custom Trainium AI chips, and Bedrock as a distribution layer for third-party models.
Apple
Custom silicon with NPUs across 2B+ active devices, on-device AI leadership, and deep consumer and small business ecosystem integration.
Microsoft
Azure cloud infrastructure, Copilot integration across Office, and enterprise distribution, though now complicated by OpenAI competition.
These moats matter more as cost compression continues to erode pricing power. When inference costs approach zero, the question stops being "which model is cheapest?" and becomes "which model has access to data that others don't?"
Signals to watch
Agentic solutions expanding into creative, analytics, and operational domains beyond coding. Providers investing in features, products, and partnerships that build on their unique data moats and ecosystem advantages.
Consumer AI
Standalone AI apps dominated 2025, but the bigger shift in 2026 is AI embedding itself into products people already use. Browsers, operating systems, and everyday apps are all gaining AI features through regular updates. That changes how AI reaches people and how companies monetize it.
AI reach is expanding beyond standalone apps
The explosive growth of standalone AI apps defined 2025: ChatGPT scaling to 800 million weekly users, Google AI Overviews reaching 1.5 billion monthly users, and a wave of AI-native browsers launching. But the more consequential shift in 2026 is AI disappearing into surfaces people already use. When AI ships as a software update rather than a new download, the addressable user base stops being measurable by app installs.
Built Into Browsers
- ● Chrome and Edge already ship AI sidebars
- ● AI-native browsers launching (Perplexity, OpenAI)
- ● Features expanding, new entrants arriving
- ● Becoming the AI interface, not just a portal to one
Embedded in Software
- ● Copilot in Office, iPhones getting Apple Intelligence
- ● AI search in apps like Instagram and Reddit
- ● AI video and creative tools in Adobe, TikTok
- ● Arriving in updates for existing apps
Running On-Device
- ● Dedicated AI chips run smaller language models locally
- ● iPhones handle summarization and writing on-device
- ● Faster responses, better privacy, works offline
- ● Nvidia investing $5B to bring AI chiplets to PCs
The on-device side of this shift is advancing faster than most people realize. Apple Intelligence already runs ~3 billion parameter models entirely on-device, far smaller than the hundreds-of-billions-scale cloud models powering ChatGPT and Gemini, but capable enough for summarization, writing assistance, and quick answers. Dedicated AI chips in flagship phones and laptops now exceed 70 TOPS, handling real-time language tasks without a network connection. Nvidia's $5 billion investment in Intel will put RTX chiplets with dedicated AI processing into Intel system-on-chips, making AI-capable PCs far more mainstream (Nvidia). The tradeoff is memory: on-device models must fit within a phone or laptop's limited RAM. But for tasks where privacy and speed matter more than raw capability, they're already good enough, and getting better fast.
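A back-of-envelope calculation shows why the ~3 billion parameter scale fits on a phone while frontier-scale models don't. The model sizes and quantization levels below are illustrative assumptions, not vendor specs:

```python
# Rough memory footprint for a language model:
# footprint ≈ parameters × bytes per weight (activations and
# KV cache add overhead, ignored here for simplicity).

def footprint_gb(params_billions: float, bits_per_weight: int) -> float:
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal gigabytes

# ~3B-parameter on-device model, 4-bit quantized:
print(round(footprint_gb(3, 4), 2))    # 1.5 GB: fits in a phone's RAM

# Hypothetical 300B cloud-scale model at 16-bit:
print(round(footprint_gb(300, 16), 1)) # 600.0 GB: data-center territory
```

That two-orders-of-magnitude gap is the whole on-device tradeoff in one number.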
When AI ships as a default feature in iOS, as a sidebar in Chrome, and as a built-in browser capability, the definition of "AI user" changes. It stops being people who chose to download ChatGPT and starts being anyone who picks up a phone or opens a laptop. This ubiquity is what creates the monetization pressure covered in the next section, and it's also what drives the bottom-up enterprise adoption pattern: workers who use AI through everyday personal software expect the same capabilities at work.
Ads are arriving in AI platforms
AI platforms targeting personal and casual usage will need to adopt advertising to sustain their business models. The audiences are massive, with 900 million weekly active users on ChatGPT and 1.5 billion monthly users on Google's AI Overviews, but the majority of those users are on free or low-cost tiers. Scale plus limited willingness to pay follows the same arc as every major consumer technology platform before it. OpenAI has already signaled plans to introduce ads into ChatGPT, and Google is expected to embed ads in Gemini and AI Mode responses.
Platforms positioned toward professional and enterprise use have a different path. Higher-capability tiers, enterprise contracts, and emerging models like agentic completion pricing (covered further in Enterprise & Workforce) let them sustain revenue without ads. Anthropic's Super Bowl LX campaign made this distinction explicit, running satirical spots showing AI chatbots steering users toward sponsored products and framing Claude as the ad-free alternative (CNBC).
Anthropic Super Bowl LX, Feb 2026 (YouTube)
Expect AI platform advertising to emerge as its own category alongside traditional search and display ads. Conversational AI surfaces lend themselves to contextual recommendations, sponsored answers, and product integrations. The divide between ad-supported and subscription-supported AI will mirror similar splits across media, music, and software.
Signals to watch
AI capabilities continuing to embed into mainstream software, browsers, and operating systems, broadening everyday familiarity with the technology. Ad-supported access launching for AI platforms targeting personal and casual usage in 2026, following the same monetization arc as every major consumer platform before it.
Enterprise & Workforce
Most organizations entered 2026 in the same position they ended 2025: investing in AI while struggling to make it work at scale. The bottleneck isn't the technology. It's the gap between what AI can do and what companies are structured to adopt, and the workforce shifts that follow.
Most enterprise AI pilots still struggle to scale
The bottleneck is not the models. It's organizational readiness. In 2025, 95% of generative AI pilots failed to deliver measurable ROI. MIT found the root cause is a "learning gap" in enterprise integration, not model quality (MIT). Data governance, workflow integration, and change management remain the real blockers. Expect that success rate to stay below 10% through 2026.
Rather than building custom AI deployments from scratch, the easiest way for enterprises to bring AI further online is enabling workers to use the consumer AI platforms they've already adopted outside of work. ChatGPT, Claude, and Gemini are already familiar tools to hundreds of millions of people. As I explored in From Pilots to Production, the companies that break through are the ones that formalize the bottom-up adoption already happening rather than imposing top-down AI strategies their workforce isn't ready for. OpenAI's launch of Frontier, an enterprise agent platform backed by consulting alliances with BCG, McKinsey, and Accenture, shows how seriously providers are investing in closing that adoption gap.
The pilot failure rate is also creating pressure on how AI gets priced. CFOs who signed off on six-figure AI contracts and saw little return are demanding more accountability before re-upping. The data to support that accountability is starting to exist. OpenAI's GDPval benchmark, which measures AI performance on 1,320 real-world tasks across 44 occupations in 9 GDP-contributing industries, found that GPT-5.2 outperforms human experts on 70.9% of tasks while running 100x faster at less than 1% of the cost. Those numbers give procurement teams something concrete to work with.
70.9%
Tasks where AI outperforms human experts
100x
Faster than human completion
<1%
Of the cost per task
GDPval: 1,320 tasks across 44 occupations in 9 GDP-contributing industries
As agentic tools become more capable of delivering complete workflows, expect pricing to shift toward outcome-based models: pay for results delivered, not seats licensed or tokens consumed. Benchmarks like GDPval make this possible by giving both buyers and vendors a shared language for measuring AI's economic contribution. Vendors that can tie their pricing to measurable business outcomes have an easier time getting through procurement than those still selling on potential.
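To make the pricing shift concrete, here's a hypothetical procurement comparison. Every figure below is an illustrative assumption, not a vendor price:

```python
# Seat licensing vs outcome-based pricing for the same annual workload.
# All numbers are hypothetical, chosen only to show the structure of the
# comparison a procurement team would run.

def seat_cost(seats: int, per_seat_month: float) -> float:
    """Pay for access: every licensed seat, used or idle."""
    return seats * per_seat_month * 12

def outcome_cost(tasks_completed: int, price_per_task: float) -> float:
    """Pay for results: only tasks actually delivered."""
    return tasks_completed * price_per_task

annual_seat = seat_cost(seats=500, per_seat_month=30)             # 180,000
annual_outcome = outcome_cost(tasks_completed=40_000,
                              price_per_task=2.50)                # 100,000

# A pilot that stalls costs proportionally less under outcome pricing,
# while idle seats bill in full either way.
print(annual_seat, annual_outcome)
```

The structural point is the risk transfer: outcome pricing moves the cost of a failed pilot from the buyer to the vendor, which is exactly what burned CFOs are asking for.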
Hiring is shifting from task-based roles toward strategic ones
As agents take on more defined, repeatable work, the roles built around executing those tasks face real pressure. The Anthropic Economic Index data from 2025 shows that 52% of AI usage is augmentation, and complex tasks saw 12x speedups compared to 9x for simpler ones. AI is already disproportionately accelerating higher-complexity work.
Declining demand
- ● Data entry and routine reporting
- ● Templated content production
- ● Basic research and summarization
- ● Standard code implementation
Rising demand
- ● Agent orchestration and oversight
- ● Cross-functional AI integration
- ● Domain expertise and knowledge architecture
- ● Strategic judgment and decision-making
This isn't a mass displacement event, but hiring patterns are visibly shifting. Code creation already doubled as a share of AI conversations in 2025, and mid-to-high wage knowledge workers showed the highest adoption rates. Organizations that redefine roles around strategic judgment and AI collaboration move faster than those still hiring for work an agent can handle.
Signals to watch
The majority of knowledge workers gaining access to familiar AI tools like ChatGPT, Gemini, Copilot, and Claude through their employers, accelerating bottom-up adoption. Job posting language shifting from task execution to AI orchestration, and whether "AI-native" roles appear in standard job taxonomies.
Regulation & Infrastructure
AI capability is advancing faster than the policy frameworks and physical infrastructure trying to keep up. Regulatory fragmentation across jurisdictions, constrained GPU supply, surging energy demands, and memory price spikes all create friction that no single organization can resolve on its own. These are the external forces shaping what's possible in 2026.
Regulatory fragmentation is adding compliance cost without clarity
The 2025 lookback documented the growing regulatory patchwork: all 50 U.S. states introduced AI-related legislation, 38 enacted it, and Congress still hasn't passed federal legislation to preempt any of it. Meanwhile, the EU AI Act's high-risk provisions covering AI used in hiring, credit scoring, and law enforcement take effect in 2026.
Federal
AI Action Plan pillars: Innovation, Infrastructure, Leadership
- ● Executive branch moves fast through AI Action Plan
- ● Congress has yet to pass federal AI legislation
- ● Executive orders shift with administrations
US States
- ● All 50 states introduced AI bills in 2025; 38 enacted them
- ● Overlapping, sometimes contradictory requirements
- ● Companies face growing compliance complexity
European Union
- ● Early phases active, high-risk provisions arrive Aug 2026
- ● Fines up to 7% of annual revenue
- ● Becoming the global standard, as GDPR did for privacy
The most likely outcome mirrors what happened with GDPR. The promise was meaningful user control over personal data. In practice, it produced cookie consent banners that almost nobody reads and compliance costs that fell on businesses without delivering proportional value to users. AI regulation is following the same pattern: companies are building governance frameworks, hiring compliance teams, and adding disclosures to satisfy overlapping requirements, while the technology advances faster than the policy frameworks trying to govern it.
The practical takeaway: nimbleness matters more than perfection. Regulations keep shifting across jurisdictions, and waiting for a stable framework means falling behind. Organizations that build compliance into their operating rhythm rather than treating it as a one-time project are better positioned to adapt as the rules change under them.
Regulation is also moving beyond enterprise compliance into areas that affect individuals directly. Multiple states now require disclosure when AI generates or substantially alters content shown to consumers. Deepfake legislation is accelerating, with new laws targeting synthetic media used without consent. Content labeling and provenance tracking are gaining legislative momentum faster than almost any other category of AI regulation.
AI regulation timeline
2024-25
State laws multiply, executive orders shape compliance expectations.
Aug 2026
EU AI Act high-risk enforcement begins.
2026 full year
Election-cycle scrutiny on AI-generated content.
TBD
Unified federal AI legislation from Congress.
Without federal AI legislation from Congress, states and the EU are filling the vacuum unevenly. The executive branch has moved through its AI Action Plan, but executive orders shift with administrations and don't carry the permanence of legislation. Whether this patchwork eventually consolidates into something functional or remains permanently fragmented is one of the bigger open questions heading into the back half of 2026.
On-prem AI demand is growing, but the window hasn't opened yet
As open-weight models become more capable, the idea of running AI on your own infrastructure is gaining traction among some technology leaders and practitioners, even if most enterprises haven't reached that conversation yet. Many organizations are still in the early stages of rolling out cloud-based AI tools to their workforce. But as adoption scales and cloud subscription costs grow, on-prem AI stands to become a more serious consideration for addressing privacy concerns, regulatory pressure from the EU AI Act, and long-term cost management.
The capability gap that once made on-prem a non-starter is narrowing fast. On the Artificial Analysis Intelligence Index (updated to v4.0 with harder benchmarks in early 2026, so scores are not comparable to Part 1), the top open-weight models now average within 7 points of the top proprietary models. As open-weight models continue to improve, the potential for enterprise workloads to be managed by self-hosted AI becomes more viable.
Intelligence Index v4.0: Top 3 Avg
But this shift is showing signals without materializing fully. On-prem AI won't see meaningful adoption until two conditions are met: AI usage becomes the norm for the majority of enterprise workers, and hardware becomes more accessible. As covered in the 2025 lookback, GPU supply remains constrained with next-generation chips sold out 12+ months ahead, and memory prices surged from $110 to $530 per 32GB kit over the past year.
Infrastructure compounds the hardware challenge. U.S. data center energy consumption is projected to grow over 130% by 2030, and some regions face 7-year waiting lists for new grid connections. The hardware bottleneck is also shifting: for inference workloads, memory bandwidth (how fast the system can feed weights to the processor) is becoming more of a constraint than raw compute. A model that fits in memory runs fast; one that spills past it collapses in performance. For enterprises evaluating on-prem, the cost per gigabyte of high-bandwidth memory matters as much as the cost per GPU.
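The bandwidth constraint is easy to quantify roughly. For a dense model generating one token at a time, each token reads approximately all the weights once, so throughput is capped near bandwidth divided by model size. The hardware figures below are illustrative assumptions, not benchmarks:

```python
# Rough ceiling on single-stream decode speed for a memory-bandwidth-bound
# model: tokens/sec ≲ memory bandwidth / model size in bytes.
# Ignores batching, caching, and sparsity; assumption-level math only.

def max_tokens_per_sec(model_gb: float, bandwidth_gb_s: float) -> float:
    return bandwidth_gb_s / model_gb

# Hypothetical 70B-parameter model quantized down to ~40 GB:
print(round(max_tokens_per_sec(40, 3350), 2))  # 83.75 on an HBM-class GPU
print(round(max_tokens_per_sec(40, 120), 1))   # 3.0 on a laptop-class chip
```

Same model, same compute class, a ~28x throughput gap purely from memory bandwidth. That's why cost per gigabyte of fast memory is the number to watch.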
Cloud providers are capturing the majority of AI compute by default, simply because they can absorb these infrastructure costs at scale. The gap between on-prem ambition and on-prem execution remains wide, making this more likely a 2027 or 2028 story.
2026: Signals
- ● Employee AI demand accelerates
- ● Open-weight models recognized as capable for standard tasks
- ● Growing adoption raises data privacy and safeguarding concerns
2027-28: Action
- ● New production capacity begins easing supply pressure
- ● Enterprise AI adoption reaches majority of workers
- ● On-prem becomes a regular pursuit, especially for privacy-sensitive enterprises
On-device AI, covered earlier, is solving the same problem at the individual level: moving compute closer to where work happens. The line between personal on-device AI and enterprise on-prem AI is blurring as employees bring AI-capable hardware into workplaces that haven't formalized their own infrastructure yet.
Model portability also deserves attention. The Artificial Analysis Intelligence Index shows that no single provider consistently leads across all task types. Organizations that architect their AI integrations to swap models without reworking their stack have a significant advantage as capabilities and pricing evolve.
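That portability can be sketched as a thin routing layer. The provider classes and registry below are hypothetical stand-ins, not real SDK calls:

```python
# Sketch of model portability: route every call through one interface so
# swapping providers is a config change, not a rewrite. Provider names
# and their "APIs" here are illustrative placeholders.

from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class ProviderA:
    def complete(self, prompt: str) -> str:
        return f"[provider-a] {prompt}"

class ProviderB:
    def complete(self, prompt: str) -> str:
        return f"[provider-b] {prompt}"

REGISTRY: dict[str, ChatModel] = {"a": ProviderA(), "b": ProviderB()}

def run_task(prompt: str, model: str = "a") -> str:
    # Application code never imports a provider directly, so changing
    # `model` re-routes a workload without reworking the stack.
    return REGISTRY[model].complete(prompt)

print(run_task("summarize Q3 report", model="b"))
```

In practice the real adapters would normalize prompts, tool schemas, and error handling per provider; the point is that only this layer changes when the leaderboard does.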
Signals to watch
Whether Congress passes federal AI legislation that unifies the state-by-state patchwork. The first EU AI Act enforcement actions and their ripple effects on global compliance. GPU and memory pricing trends signaling when on-prem AI becomes practical at scale.
The Open Web
Of everything covered in the 2025 lookback, AI and the open web hit closest to my day-to-day work. The crawl-to-click imbalance, the content quality arms race, and the growing divide between publishers blocking AI and those optimizing for discoverability all sharpened through the year. Both sides of that divide are intensifying.
Open web tension is getting worse before it gets better
The data from 2025 paints a clear picture: AI companies are consuming publisher content at scale while sending almost nothing back. OpenAI's crawlers make roughly 1,400 requests for every referral click sent back to publishers. For Anthropic, the ratio is nearly 71,000 to one (Cloudflare). Humans now account for just 43.5% of web traffic hitting major networks, and that share continues to decline.
But the response from the open web isn't uniform. The web is splitting into two camps, and that divide is becoming more visible. The partnership camp is where the more interesting dynamics are playing out.
Blocking
Aggressive robots.txt rules
Blanket AI bot blocking across publisher sites
Copyright lawsuits
NYT, Tribune vs. Perplexity, and growing legal action
Legislative pressure
Proposed licensing requirements for AI training data
Paywalls and content gating
Restricting access to force direct engagement
Partnering
llms.txt and structured AI protocols
112 → 2,000+ sites and accelerating
Licensing deals with AI platforms
Revenue-sharing agreements formalizing content access
Content optimized for AI discovery
Structured data, entity clarity, and citation-ready formats
Strategic platform alignment
Enterprise support partnerships and strategic alignment
Expect continued lawsuits and more aggressive blocking from the resistance camp, alongside possible new legislation, though technology will likely outpace policy. On the partnership side, organizations are formalizing their relationships with AI platforms. Aligning with AI signals relevance to investors and boards, mirrors the adoption happening inside their own workforce, and positions them for a distribution channel growing faster than any other. As AI platforms develop more structured ways to compensate content providers, the economics of cooperation are becoming clearer.
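On the blocking side, the first lever is usually robots.txt. This sketch uses Python's standard-library parser to test a publisher policy against AI crawler user agents; the policy itself is an illustrative example, and the agent tokens mirror commonly published crawler names:

```python
# Check which crawlers a hypothetical publisher robots.txt allows,
# using the stdlib Robots Exclusion Protocol parser.

from urllib.robotparser import RobotFileParser

POLICY = """\
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(POLICY.splitlines())

for agent in ("GPTBot", "ClaudeBot", "Googlebot"):
    print(agent, "allowed:", rp.can_fetch(agent, "/article"))
```

The catch, as the crawl-to-click data above shows, is that robots.txt is voluntary: it only constrains crawlers that choose to honor it, which is why the blocking camp escalates to paywalls and lawsuits.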
AI visibility and AI experience are becoming specializations
Optimizing for how AI models discover and reference your content was an afterthought a year ago. It now sits alongside traditional SEO as a recognized practice area, with AEO (Answer Engine Optimization), GEO (Generative Engine Optimization), and the broader umbrella term AISO (AI Search Optimization) appearing across thousands of job postings on Indeed and LinkedIn. But the demand extends well beyond search optimization. Roles like Conversational UX Designer and GenAI Product Manager reflect a broader shift: both job responsibilities and entirely new positions are increasingly requiring AI fluency, reinforcing the enterprise workforce trends covered earlier.
The data supports what I'm seeing in practice: AI-related job postings have grown +134% since 2020, even as total postings grew just 6% over the same period. AI visibility, AI-native product design, and conversational experience roles are all evolving from niche add-ons into recognized specializations.
Organizations already doing SEO well are better positioned than they think, as I argue in How to Improve AI Visibility, but the gap between those who invest in AI discovery and experience early and those who wait is widening fast.
Job postings mentioning AI (Indeed)
Signals to watch
Crawl-to-click ratios and major lawsuit outcomes shaping the blocking vs. partnering divide. AI visibility and AI experience roles appearing in standard job taxonomies, and enterprise budgets allocating for AI discoverability and conversational product design.
The 2025 lookback documented where AI stood at the end of last year: explosive consumer adoption, a narrowing model performance gap, enterprise pilots that mostly failed to deliver, regulations fragmenting faster than they're unifying, and an open web caught between AI's appetite for content and publishers deciding how to respond.
These shifts follow from that data. AI platforms are consolidating around data advantages and monetizing their massive user bases. Enterprises are working to get AI tools into the hands of their workforce while simultaneously incorporating AI into their operational workflows and customer-facing products. The open web continues to split between those blocking AI and those building partnerships with it, while new specializations emerge around AI visibility and AI-driven experiences.
Some of these shifts will hold, some won't, and the ones that surprise us will probably be more interesting than the ones we got right.
The 2025 Lookback
The data and analysis behind 2025's biggest shifts in models, hardware, adoption, enterprise use, regulation, and the open web.
Read Part 1
Found these insights interesting? I'd enjoy hearing your take. Let me know what resonated or where you see things differently.
Connect on LinkedIn