One Person Unicorn

The Age of Agents: The End of AI Hype and the Economics of Infighting (2025-2028)

CodingoAI

Executive Summary

As of September 2025, the generative AI gold rush is over, and a pragmatic, sometimes ruthless, war for tangible value and market dominance has begun. The initial hype has been replaced by a cold focus on return on investment (ROI), efficiency, and defensible business models. The next three years (2026-2028) will be defined by three pivotal forces:

The Rise of Agentic AI: The shift from passive content generation to active, autonomous AI agents is a new technological frontier, fundamentally reshaping workflows and creating a new class of ‘virtual colleagues.’

Geopolitical Decoupling: The escalating US-China AI rivalry is no longer a backdrop but a central driver of strategy, fragmenting supply chains, talent pools, and regulatory landscapes.

The Regulatory Reckoning: A global crackdown on deceptive practices like ‘AI washing’ and the full implementation of major legal frameworks like the European Union’s AI Act are turning compliance from a legal hurdle into a source of competitive advantage.

Success in this new era demands not only technical prowess but also a keen understanding of the industry’s ‘economics of infighting’—the weaponization of AI for disinformation, the exploitation of regulatory loopholes, and a pervasive culture of deceptive marketing. This report provides a strategic guide to navigating both the legitimate and illegitimate forces shaping the market.

Part I: State of Play (September 2025) - Dominant Technologies and Market Realities

This section establishes the baseline reality of the AI industry in Q3 2025, assessing what is truly working, what is failing, and why, beyond the initial generative AI hype.

1.1 The New Tech Horizon: From Generation to Action

Commercialization of Agentic AI: The most significant technological shift is the move from prompt-response models to autonomous agents. These are not mere chatbots; they are AI systems capable of understanding user intent, planning multi-step tasks, and executing them across various applications. We will analyze how ‘Agentic AI’ is being productized as ‘virtual colleagues’ in specific domains like marketing. For example, a marketing AI can autonomously design, execute, and optimize entire advertising campaigns based on high-level key performance indicators (KPIs). This signifies a paradigm shift from viewing AI as a tool to seeing it as a labor force multiplier. The technical underpinnings, as seen in the latest research papers, focus on reasoning architectures, memory systems, and multi-agent collaboration.
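The plan-act-observe loop behind such agents can be sketched in a few lines. This is a minimal illustration, not a real framework: the planner here is a rule-based stand-in for an LLM, and every name (`run_agent`, `plan_next_step`, `TOOLS`) is hypothetical.

```python
# Minimal sketch of an agentic plan-act-observe loop in the marketing
# scenario above. The "tools" are stubs; a real agent would call live APIs.

TOOLS = {
    "fetch_kpis": lambda _: {"ctr": 0.8, "cpa": 42.0},                # stub: read campaign KPIs
    "adjust_bids": lambda kpis: f"bids lowered (cpa={kpis['cpa']})",  # stub: act on them
}

def plan_next_step(goal, history):
    """Stand-in for an LLM planner: pick the next tool from current state."""
    if not history:
        return "fetch_kpis"
    if isinstance(history[-1][1], dict):   # we have KPIs -> act on them
        return "adjust_bids"
    return None                            # goal satisfied -> stop

def run_agent(goal, max_steps=5):
    history, observation = [], None
    for _ in range(max_steps):             # bounded loop: agents need a stop condition
        step = plan_next_step(goal, history)
        if step is None:
            break
        observation = TOOLS[step](observation)
        history.append((step, observation))  # memory: results feed back into planning
    return history

trace = run_agent("keep CPA under target")
print([step for step, _ in trace])
```

The key structural difference from a chatbot is the loop itself: the model's output selects an action, the action's result re-enters the context, and the process repeats until a stop condition, which is why reasoning, memory, and termination control dominate the research agenda.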

Beyond Text: Multimodal, Embodied, and World Models: The industry is aggressively expanding beyond language. We will detail the rapid growth of multimodal AI, which integrates and reasons across text, images, voice, and sensor data. Multimodal AI enables more sophisticated applications, and the market is expected to grow at a CAGR of over 34%. The next logical step, ‘Embodied AI’ or ‘Physical AI,’ is now moving beyond research into early commercialization, with significant venture capital flowing into startups focused on humanoid robots and ‘world models’—AIs that understand and interact with the physical world based on the laws of physics, not just statistical text patterns. This foreshadows the long-term convergence of AI with robotics and industrial automation.

The Economic Imperative: Efficiency and Specialization: The era of ‘bigger is better’ foundation models is ending due to unsustainable costs. The market has pivoted to economic efficiency. This is evidenced by two key trends:

  • The Return of MoE: Mixture-of-Experts (MoE) architectures, which activate only a fraction of a neural network for a given task, have become mainstream. They offer performance comparable to dense models at a much lower inference cost, making them a critical factor for profitability.
  • The LLM/SLM Bifurcation: The market is bifurcating. A few large language models (LLMs) are becoming commoditized, while real value is being created in specialized small language models (SLMs) trained on proprietary, high-quality domain data. This enables higher accuracy and lower operating costs for specific business tasks.
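The MoE cost advantage comes from sparse activation: the router scores every expert but executes only the top-k of them. A toy sketch follows, under stated simplifications: scalar inputs, trivial linear "experts" standing in for full feed-forward blocks, and an arbitrary gating function standing in for a learned gate.

```python
# Toy sketch of Mixture-of-Experts top-k routing. Not a production design;
# it only illustrates why inference cost scales with *active* parameters.
import math

NUM_EXPERTS, TOP_K = 8, 2

# Each "expert" is a trivial function here; in a real MoE each is a full FFN.
experts = [lambda x, w=w: w * x for w in range(1, NUM_EXPERTS + 1)]

def router(x):
    """Gating: score every expert, then select only the TOP_K to run."""
    scores = [math.sin(i + x) for i in range(NUM_EXPERTS)]  # stand-in for a learned gate
    top = sorted(range(NUM_EXPERTS), key=lambda i: scores[i], reverse=True)[:TOP_K]
    z = sum(math.exp(scores[i]) for i in top)
    return [(i, math.exp(scores[i]) / z) for i in top]      # softmax over the selected

def moe_forward(x):
    selected = router(x)
    # Only TOP_K of NUM_EXPERTS experts actually execute on this input,
    # so compute per token is a fraction of the total parameter count.
    output = sum(weight * experts[i](x) for i, weight in selected)
    return output, [i for i, _ in selected]

y, active = moe_forward(0.5)
print(f"active experts: {active} of {NUM_EXPERTS}")
```

With 2 of 8 experts active per input, roughly a quarter of the "model" runs on any given token; in production MoE systems the ratio is often far smaller, which is what makes their inference economics competitive with much smaller dense models.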

1.2 The Business Battlefield: The Epidemic of Big AI Project Failures

The Sobering Statistics: Despite massive investment, the AI project failure rate is alarmingly high. Estimates suggest that 70-85% of AI projects fail to move beyond the proof-of-concept (PoC) stage into production. This is the single biggest challenge to realizing the economic promise of AI.

Root Cause Analysis - The Data Chasm: The primary cause of failure is not the algorithm but the data. The lack of high-quality, well-governed, and relevant training data is the “biggest barrier.” Bad data leads to poor model performance, biased outcomes, and project abandonment. This is widening the ‘AI divide,’ where companies with mature data infrastructure pull ahead, while others remain stuck in perpetual pilot projects.

Misguided Strategy and Culture: Beyond data, failures are rooted in business fundamentals. Key failure factors include:

  • Misunderstanding the Problem: Focusing on the technology itself rather than solving a clear business problem.
  • Lack of Trust and Cultural Resistance: Without clear governance, education, or communication, employees fear AI and actively resist its implementation.
  • Inadequate Infrastructure: Legacy IT systems cannot support the demands of AI deployment and data management.

Industry Case Studies - Successes and Failures:

  • Manufacturing: We will analyze the challenges of AI adoption in manufacturing, including data fragmentation, skills gaps, and integration with legacy systems. Success stories, like Georgia-Pacific using AI for predictive maintenance to reduce unplanned downtime by 30%, will be contrasted with the broader industry reality of struggling to move beyond pilot projects.
  • Finance: AI is being used successfully in algorithmic trading, fraud detection (Mastercard doubled its detection rates), and personalized customer support (Morgan Stanley). However, the risks of algorithmic bias and the potential for market manipulation remain significant challenges.
  • Healthcare (A Post-Mortem): The high-profile failure of IBM’s Watson for Oncology provides critical lessons. The project failed because of: 1) Data Mismatch: It was trained on hypothetical cases, not real-world patient data. 2) Integration Failure: It could not integrate with hospital workflows and was difficult for doctors to use. 3) Hype: IBM’s marketing created unrealistic expectations that the technology could not meet, destroying credibility.

These market realities reveal two critical implications. First, the AI economy is bifurcating into ‘model providers’ and ‘value creators.’ As foundation LLMs commoditize and inference costs plummet, competing on the pure performance of a general-purpose model is a losing game for most. Simultaneously, the highest value is being captured by those who can solve specific, high-value business problems using proprietary data. Success in manufacturing or finance comes not from building a better LLM, but from applying existing LLMs to unique datasets to solve specific problems like predictive maintenance or fraud detection. This means the market is splitting: a few tech giants will supply the ‘AI electricity’ (the models), but the real profits and competitive advantages will go to the companies that become masters of ‘AI applications’ (the value creation) in their respective industries. This suggests that the most valuable AI companies of the future may not be AI companies at all, but rather the incumbent industry leaders with the best data.

Second, ‘AI Readiness’ is emerging as a new key metric for enterprise valuation, and it is a measure of data maturity and organizational culture, not technology. With 70-85% of projects failing, and the primary reasons being data quality, data governance, and cultural resistance, there is overwhelming evidence that an organization’s ability to successfully deploy AI is predicted not by how much it spends on AI software, but by the health of its underlying data infrastructure and its capacity for change management. This implies that investors and analysts will begin to evaluate companies not just on their P&L, but on an ‘AI Readiness Score’—a composite measure of data governance maturity, infrastructure modernization, and employee AI literacy. Companies with a low score represent a massive investment risk, regardless of their stated AI ambitions, creating a new framework for due diligence and M&A.
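One way such an ‘AI Readiness Score’ could be operationalized is as a weighted composite over the three dimensions named above. The specific weights, rating scale, and risk bands below are illustrative assumptions, not an established industry rubric.

```python
# Hedged sketch of a composite "AI Readiness Score": a weighted average of
# 0-100 ratings for data governance, infrastructure, and AI literacy.
# Weights and thresholds are illustrative, not a standard.

WEIGHTS = {"data_governance": 0.5, "infrastructure": 0.3, "ai_literacy": 0.2}

def readiness_score(ratings):
    """Combine per-dimension 0-100 ratings into one weighted score."""
    missing = set(WEIGHTS) - set(ratings)
    if missing:
        raise ValueError(f"unrated dimensions: {sorted(missing)}")
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

def risk_band(score):
    # Illustrative due-diligence bands: a low score flags deployment risk
    # regardless of the company's stated AI ambitions.
    return "high risk" if score < 40 else "watch" if score < 70 else "ready"

company = {"data_governance": 30, "infrastructure": 60, "ai_literacy": 50}
score = readiness_score(company)   # 0.5*30 + 0.3*60 + 0.2*50, roughly 43
print(round(score, 1), risk_band(score))
```

Note that data governance carries the heaviest weight in this sketch, reflecting the report's finding that data maturity, not software spend, is the strongest predictor of deployment success.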

Part II: The Economics of Infighting - Deception, Manipulation, and Competitive ‘Foul Play’

This section offers unvarnished insight into the unethical and illicit tactics used to gain a competitive edge in the AI industry.

2.1 “AI Washing”: The Epidemic of Deceptive Marketing

The AI Washing Playbook: We will dissect the common tactics companies use to exaggerate their AI capabilities. This is not just hyperbole; it is a systematic strategy to mislead investors and customers. The types include:

  • Misuse of Technical Terms: Labeling simple automation or statistical analysis as ‘AI’ or ‘machine learning.’
  • Scope Exaggeration: Presenting a limited AI feature as if it powers the entire company.
  • Misattribution of Technology Source: Using a third-party API, like OpenAI’s, but marketing it as a proprietary, in-house AI system.

The Regulatory Reckoning - The FTC and SEC Fight Back: Regulators are no longer passive. We will analyze the marked increase in enforcement actions.

  • SEC Actions: The U.S. Securities and Exchange Commission (SEC) has charged several investment advisers (Delphia, Global Predictions) for making false claims about their AI capabilities, setting a clear precedent that ‘AI washing’ can constitute securities fraud.
  • FTC Actions: The U.S. Federal Trade Commission (FTC) is aggressively pursuing consumer protection lawsuits against companies making deceptive claims. The landmark lawsuit against Air AI in August 2025 serves as a critical case study.

Case Study: FTC vs. Air AI: This case is pivotal because it targets agentic AI and claims about its ability to replace human employees.

  • Deceptive Claims: Air AI marketed its ‘Odin’ tool as a fully autonomous agent that could replace human sales representatives, conduct complex conversations, and generate massive profits for small businesses.
  • The Reality: According to the FTC’s complaint, the tool was “defective,” could not perform basic functions, and required extensive manual pre-scripting, making it virtually unusable. Air AI also allegedly bilked millions from customers with a bogus refund guarantee.
  • The Precedent: This case signals that regulators are now scrutinizing the ‘next big things’ in AI and will not tolerate exaggerated claims about automation and productivity, especially when targeted at vulnerable small businesses.

2.2 The Weaponization of AI: Disinformation and Corporate Threats

State-Led Influence Operations: AI has become a tool of statecraft. Drawing on OpenAI’s threat intelligence reports, we will detail how state actors like Russia, China, and Iran are using generative AI to:

  • Generate multilingual propaganda content at scale with higher credibility.
  • Create fake social media profiles and comments to create the illusion of grassroots support or opposition.
  • Target specific geopolitical events, like elections or conflicts, to manipulate public opinion.

Corporate Espionage and Market Manipulation: The same techniques used by nations are being adopted for corporate warfare. While documented cases are rare (due to their covert nature), the capability exists for companies to deploy AI to:

  • Spread negative rumors or fake news about a competitor to damage their stock price.
  • Generate orchestrated social media campaigns to attack a competitor’s products or promote their own.
  • Use AI for sophisticated social engineering to gain access to a competitor’s trade secrets.

2.3 The Gray Zone: Exploiting Loopholes and Ethical Boundaries

Data Heists: The foundation of AI dominance is data, and the methods for acquiring it are often ethically and legally dubious. This includes the mass scraping of copyrighted text and images from the internet without permission, which is at the heart of numerous high-stakes lawsuits against major AI labs.

Algorithmic Bias as a Business Model: While often discussed as an accidental flaw, bias can be an intentional feature. For example, a credit-scoring AI could be subtly tuned to favor or disfavor certain demographics, not for overtly discriminatory reasons, but because it optimizes for profit based on historical data correlations. This is a form of ‘foul play’ that is extremely difficult to prove or regulate.

The Content Apocalypse - The “Gizmodo/io9” Precedent: The incident where G/O Media used AI to publish low-quality, error-ridden articles on its sites like io9 is not just a mistake; it is a business model test. The strategy is to generate massive volumes of content at near-zero cost to capture search engine traffic and ad revenue, with little regard for quality or accuracy. This is a cynical play that prioritizes volume over value, devaluing human journalism and polluting the information ecosystem.

These unfair practices are evolving into structural features of the AI industry, imposing a ‘trust tax’ on the entire ecosystem. The multiple instances of misconduct, such as AI washing, state-sponsored manipulation, and the generation of low-quality content, are not isolated incidents. These actions have a cumulative effect. Consumers and businesses are becoming increasingly skeptical of all AI claims, which forces legitimate companies to spend more time and money proving their claims are true, effectively paying a ‘trust tax’ imposed by malicious actors. Ultimately, this means that ‘trustworthiness’ itself becomes a key product feature and competitive differentiator. Companies that can verifiably demonstrate that their AI is safe, reliable, and honestly marketed will command a premium and attract more risk-averse enterprise customers. This elevates AI ethics and governance from a corporate social responsibility (CSR) function to a core part of product strategy.

Consequently, regulatory enforcement becomes the single most important catalyst shaping the competitive landscape over the next three years. For years, the AI industry operated in a regulatory vacuum, dominated by a ‘move fast and break things’ ethos. The actions by the FTC and SEC signal a fundamental shift. There are now clear financial and legal consequences for deception, and the EU AI Act imposes even more structured requirements. This means the winners of the next phase may not be those with the best technology, but those who can best navigate this complex legal maze. A startup with a groundbreaking model could be wiped out by a single FTC injunction, while a slower-moving but fully compliant competitor could thrive. This makes the legal and compliance teams at AI companies just as critical as their R&D teams.

Part III: The Next Three Years (2026-2028) - Navigating Geopolitics, Regulation, and Disruption

This section provides a forward-looking forecast focusing on the macro forces that will define strategic decision-making.

3.1 The Great Decoupling: The US-China AI Cold War

Asymmetric Competition: The US and China are not fighting the same war. We will analyze their differing strategies:

  • United States: Dominant in foundation model R&D, private venture capital investment, and attracting global talent. Its strategy is market-driven, led by a few powerful tech giants and startups. The focus is on pushing the technological frontier (e.g., AGI).
  • China: Leads in the quantity (though quality is debated) of AI patents and in government-led, top-down implementation across industry and state infrastructure. Its strategy is state-driven, focused on practical application, social governance, and achieving technological self-reliance (e.g., AI chips) to counter US sanctions.

Impact on Global Business: This rivalry is not abstract; it has direct consequences:

  • Supply Chain Fragmentation: US export controls on advanced AI chips to China are forcing a bifurcation of the hardware supply chain, increasing costs and complexity for global firms.
  • Data Balkanization: Nations are increasingly aligning with either the US or Chinese sphere of influence, imposing restrictions on cross-border data flows, which are essential for training AI models.
  • The War for Talent: Both nations are fiercely competing to secure the best AI researchers, with the US currently holding an edge, but China is rapidly growing its domestic talent pool through massive government investment in education.

3.2 The Global Regulatory Maze: Compliance as a Competitive Moat

A Fragmented World: There is no single global standard for AI regulation. Companies must navigate a patchwork of competing legal frameworks. We will provide a strategic comparison of the three most influential models:

  • The EU’s ‘AI Act’: A comprehensive, risk-based framework that classifies AI systems by risk level (unacceptable, high, limited, minimal) and imposes strict obligations on ‘high-risk’ applications. Key provisions for general-purpose AI models are coming into force in late 2025, but implementation is complex and facing delays. It is the most prescriptive and burdensome framework globally.
  • The US ‘Market Enforcement’ Model: The US lacks a single, comprehensive federal law. Instead, it relies on agencies like the FTC and SEC to apply existing laws (against deception, fraud, and discrimination) to AI, as seen in the ‘AI washing’ cases. This approach is more flexible but less predictable, and the current administration has signaled a preference for even greater deregulation.
  • South Korea’s ‘Balanced’ Model: The ‘AI Basic Act’ (effective Jan 2026) attempts to strike a balance between promoting innovation and ensuring safety and trust. It is less restrictive than the EU’s, focusing primarily on ‘high-risk AI’ systems and establishing national governance structures while providing significant support for the AI industry.

Table: Comparing Global AI Regulatory Landscapes (2025)

| Region/Framework | EU (AI Act) | US (Sectoral Enforcement) | South Korea (AI Basic Act) |
| --- | --- | --- | --- |
| Core Philosophy | Risk-Based, Precautionary | Market-Driven, Ex-Post Enforcement | Promotion and Responsibility in Harmony |
| Key Legislation | AI Act (Regulation (EU) 2024/1689) | Existing laws (FTC Act, Securities Act, etc.) | AI Basic Act |
| Primary Scope | ‘High-Risk’ Systems Across Sectors | Deceptive Marketing, Fraud, Bias | ‘High-Risk AI’ and Industry Promotion |
| Key Compliance Deadlines (as of Sep 2025) | GPAI Model Rules (Aug 2025), Full Application (Aug 2026) | Ongoing Enforcement Actions | Full Law Enforcement (Jan 2026) |
| Enforcement Body | National Competent Authorities, AI Office | FTC, SEC, Sectoral Regulators | Ministry of Science and ICT, National AI Committee |
| Strategic Implication for Firms | High compliance burden, barrier to entry; ‘trust’ certification can be a brand advantage | High litigation risk, low predictability; favorable for speed of innovation | Balanced environment; favorable for R&D and scale-up, potential model for other nations |

The divergence in these regulatory environments is leading to more than just a decoupling; it’s a ‘tripolarizing’ phenomenon around distinct regulatory philosophies. The EU’s strict precautionary model, the US’s laissez-faire enforcement model, and South Korea’s balanced ‘promote and regulate’ model represent not just variations, but fundamentally different philosophies about the role of the state in managing technology. This means that a company’s global strategy must be tailored to each bloc, creating opportunities for a new ‘foul play’ strategy: ‘regulatory arbitrage.’ For example, a company could develop and train a high-risk AI system in the less-regulated US to gain market traction and data, and only then tackle the expensive compliance challenges required for EU market entry. South Korea could become a favored ‘sandbox’ for companies seeking a stable regulatory environment that is less punitive than the EU’s but more structured than the US’s.

3.3 The Next Wave: Shifts in Technology and Labor

The Path to Commercialization: Towards 2028, the core R&D focus will be on technologies that enable AI to interact more deeply with the real world. This includes the development of robust world models and physical AI that can power the next generation of autonomous vehicles, robotics, and scientific discovery platforms.

The AI-Augmented Workforce: The conversation is shifting from ‘AI replaces jobs’ to ‘AI augments skills.’ The most valuable employees will be those who can effectively collaborate with AI agents. This places a premium on the soft skills that are difficult to automate: critical thinking, creativity, and emotional intelligence.

The Rise of the AI Tutor: Generative AI will revolutionize corporate education and training. Personalized AI tutors can provide one-on-one coaching at scale, dramatically accelerating upskilling and reskilling efforts to close the AI talent gap. This is not just a benefit for employees; it is a critical tool for companies to build an AI-ready workforce.

The most severe ‘foul play’ of the next three years will be the weaponization of AI against the AI industry itself. We have already seen AI used to spread disinformation about nations and companies, alongside the emergence of complex regulations such as the EU AI Act. The next logical step is to combine the two. A sophisticated actor (state or corporate) could use AI to generate deepfake evidence or a surge of bot-driven complaints suggesting a competitor’s AI product is non-compliant with, for example, the high-risk provisions of the EU AI Act. This could trigger a costly and time-consuming regulatory investigation that freezes a competitor’s ability to operate in a key market, even if the claims are ultimately baseless. This is the weaponization of the regulatory framework itself, using AI-generated deception as the ammunition. It is an emerging class of threat that most companies are unprepared for.

Conclusion: Strategic Imperatives for the Agentic Age

  • Embrace Pragmatism Over Hype: The key to success is no longer chasing the latest model, but a relentless focus on solving real business problems with a clear ROI. Invest in data infrastructure and governance as your primary competitive moat.
  • Proactively Navigate the Regulatory Maze: Treat compliance not as a cost center, but as a strategic function. Design for trust and transparency from the outset to win in the most heavily regulated markets.
  • Prepare for a New Class of Threats: The competitive landscape now includes state-sponsored disinformation and the weaponization of regulation. Build resilience by investing in cybersecurity, threat intelligence, and a robust crisis communication plan.
  • Foster the Human-AI Partnership: The future of the workforce lies not in replacing humans, but in augmenting their skills. Invest heavily in AI literacy and cultivate the soft skills that will differentiate talent in the agentic age. The ultimate competitive advantage will belong to the organizations that master the collaboration between human ingenuity and artificial intelligence.
