AI Nausea: When “All-In” Becomes All-Cost (and All-Risk)

Provocative title, serious problem. If you’re feeling a form of “AI nausea” lately, you’re not alone. In boardrooms, earnings calls, vendor pitches, and internal town halls, AI and GenAI have become the default answer—often before we’ve even framed the question. That’s not innovation. That’s reflex.

This piece is intentionally sharper than my usual business analyses: not because AI isn’t transformative (it is), but because the current corporate discourse is drifting into a dangerous mix of magical thinking, budget amnesia, and risk blindness.

Let’s do three things:

  1. Challenge the “all-in” AI strategy that ignores energy, infrastructure constraints, and full economic cost.
  2. Call out GenAI as the “universal solution” myth—and re-center proven disciplines like process reengineering and RPA where they still win.
  3. Map the corporate risks and unknowns of scaled AI usage, and propose a governance-and-delivery playbook that actually holds up in production.

1) The Anatomy of “AI Nausea”

AI nausea isn’t skepticism about technology. It’s a reaction to cognitive overload and strategic dilution:

  • Everything becomes an “AI initiative,” so nothing is clearly prioritized.
  • Executives demand “AI everywhere” while teams lack clean data, stable processes, and change capacity.
  • Vendors rebrand old capabilities with “GenAI” stickers and sell urgency instead of outcomes.
  • Governance lags adoption—until an incident forces a painful reset.

AI doesn’t fail because it’s not powerful. It fails because organizations deploy it like a trend, not like a production capability with constraints, costs, and risk.

The antidote is not “less AI.” It’s better decisioning: where AI is used, why, by whom, under what controls, and with what measurable value.


2) The “All-In” Trap: Energy, Cost, and the Economics You Can’t Ignore

The “all-in” messaging is seductive: invest aggressively, modernize everything, out-innovate competitors. But most “all-in” roadmaps ignore three inconvenient realities:

2.1 Energy is not an abstract externality

AI runs on compute. Compute runs on electricity. Electricity runs on infrastructure. And infrastructure has limits—grid capacity, permitting cycles, transformer availability, cooling, water constraints, and local community acceptance.

In many markets, the constraint is no longer “do we have the right model?” It’s “can we power and cool the workload reliably, affordably, and sustainably?” That changes the economics, the timelines, and the reputational risk of your AI strategy.

2.2 “Cost per demo” is not “cost per enterprise outcome”

GenAI pilots are cheap relative to scaled operations. Enterprises routinely underestimate:

  • Inference cost at scale (especially when usage becomes habitual).
  • Data plumbing: integration, lineage, permissions, retention, and observability.
  • Model governance: evaluation, monitoring, drift detection, incident handling.
  • Security hardening: prompt injection defenses, access controls, red teaming, logging.
  • Change management: adoption is not automatic; it must be designed.

Many organizations are discovering a new category of technical debt: AI debt—a growing burden of poorly governed models, shadow deployments, duplicated tools, and opaque vendors.

2.3 “All-in” often means “all-over-the-place”

When AI becomes a mandate rather than a strategy, two things happen:

  • Teams chase use cases that are easy to demo but hard to operationalize.
  • Leadership gets a portfolio of projects, not a portfolio of outcomes.

3) Practical Recommendations: Treat AI Like an Industrial Capability

Here is the pragmatic framing: AI is one tool in the value-creation toolbox. Powerful, yes—but not exempt from economics.

3.1 Build an “AI value thesis” before you build an AI factory

Define value in three buckets—and force every initiative to live in one:

  • Revenue growth: conversion, personalization, pricing, product innovation.
  • Cost productivity: automation, deflection, cycle-time reduction, quality improvements.
  • Risk reduction: fraud detection, compliance controls, safety monitoring.

Then require each use case to specify: baseline, target KPI, owner, measurement method, and the operational changes required to realize value.

3.2 Introduce a “compute budget” the same way you have a financial budget

Most companies would never approve “unlimited spending” for cloud storage or travel. Yet GenAI often gets deployed without tight discipline around usage patterns and unit economics.

Do this instead:

  • Assign cost-per-transaction targets (and track them).
  • Use model tiering: smaller/cheaper models by default; premium models only when needed (a minimal routing sketch follows this list).
  • Implement caching, summarization, and retrieval patterns to reduce repeated inference.
  • Set rate limits and guardrails for high-volume workloads.
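
To make the tiering-and-budget idea concrete, here is a minimal sketch in Python. The model names, per-token prices, complexity threshold, and cost-per-transaction target are illustrative assumptions, not vendor figures; the point is that routing and unit-cost tracking are a few dozen lines of discipline, not a platform program.

```python
from dataclasses import dataclass

# Illustrative tiers and blended per-1K-token prices; real values come from your vendor contracts.
@dataclass
class ModelTier:
    name: str
    cost_per_1k_tokens: float

DEFAULT_TIER = ModelTier("small-default", 0.0005)   # hypothetical cheap model
PREMIUM_TIER = ModelTier("large-premium", 0.0150)   # hypothetical premium model

COST_PER_TRANSACTION_TARGET = 0.02  # the unit-economics target the business signed off on

def pick_tier(task_complexity: float, est_tokens: int) -> ModelTier:
    """Route to the premium model only when the task needs it AND the budget allows it."""
    needs_premium = task_complexity > 0.7  # e.g. from a lightweight classifier or heuristic
    premium_cost = PREMIUM_TIER.cost_per_1k_tokens * est_tokens / 1000
    if needs_premium and premium_cost <= COST_PER_TRANSACTION_TARGET:
        return PREMIUM_TIER
    return DEFAULT_TIER

def transaction_cost(tier: ModelTier, tokens_used: int) -> float:
    """Track realized cost per transaction so it can be compared to the target."""
    return tier.cost_per_1k_tokens * tokens_used / 1000

if __name__ == "__main__":
    tier = pick_tier(task_complexity=0.85, est_tokens=1200)
    print(tier.name, round(transaction_cost(tier, tokens_used=1200), 4))
```

In practice the same pattern extends to caching and rate limits: check a cache before calling any model, and queue or reject requests once a workload exceeds its budget.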

3.3 Separate “innovation sandboxes” from “production platforms”

Pilots belong in a sandbox. Enterprise rollout belongs in a governed platform with:

  • Approved models and vendors
  • Data access controls and policy enforcement
  • Logging and auditability
  • Evaluation harnesses and ongoing monitoring
  • Clear incident response procedures

3.4 If your strategy ignores energy, it isn’t a strategy

At minimum, leaders should ask:

  • What’s our forecasted AI electricity footprint and peak demand profile?
  • Which workloads must run in real time, and which can be scheduled?
  • What’s our plan for location, resiliency, and sustainability trade-offs?
  • Are we choosing architectures that reduce compute intensity?

4) GenAI Is Not a Universal Hammer

GenAI excels at language, synthesis, and pattern completion. That does not mean it is the optimal solution to every business problem.

The current market behavior is a classic failure mode: once a tool becomes fashionable, organizations start redefining problems to fit the tool. That’s backwards.

There are at least four categories of problems where GenAI is routinely over-applied:

  • Broken processes (automation won’t fix a bad process design).
  • Data quality issues (GenAI can mask them, not solve them).
  • Deterministic rules (where simple logic or RPA is cheaper and more reliable).
  • Regulated decisions (where explainability, auditability, and bias constraints dominate).

If your process is chaos, GenAI will generate faster chaos—just in nicer sentences.


5) Where GenAI Truly Wins (and Where It Loses)

5.1 High-fit GenAI patterns

  • Knowledge work acceleration: summarizing long documents, drafting variants, extracting structured fields from unstructured text (with validation).
  • Customer support augmentation: agent assist, suggested replies, faster retrieval of policies and procedures.
  • Software productivity: scaffolding, refactoring assistance, test generation—when governed and reviewed.
  • Content operations: marketing drafts, localization, internal communications—within brand and legal constraints.
  • Search + retrieval: better discovery across enterprise knowledge bases (RAG) if content is curated and access-controlled.

5.2 Low-fit GenAI patterns

  • High-volume transactional automation with stable rules (classic RPA/workflow engines often win).
  • Financial close and controls where traceability and determinism matter (GenAI can assist, but shouldn’t “decide”).
  • Safety-critical decisions where errors have outsized impact.
  • Processes with low standardization and no documented baseline (you need process work first).

6) When Process Reengineering and RPA Beat GenAI (with Examples)

Before you apply GenAI, ask a blunt question: Is this a process problem, a workflow problem, or a language problem?

Example A: Invoice processing in shared services

Common GenAI pitch: “Let a model read invoices and route exceptions.”

Often better approach:

  • Process reengineering to standardize invoice submission channels and required fields
  • Supplier portal improvements
  • Rules-based validation + OCR where needed (a minimal validation sketch follows this list)
  • RPA for deterministic steps
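
To illustrate why the deterministic path is often enough here, below is a minimal sketch of rules-based invoice validation. The field names and checks are hypothetical and would follow your ERP’s data model.

```python
from decimal import Decimal

REQUIRED_FIELDS = {"supplier_id", "invoice_number", "invoice_date", "currency", "total"}

def validate_invoice(invoice: dict) -> list[str]:
    """Deterministic checks: cheap, auditable, and explainable to suppliers and auditors."""
    errors = [f"missing field: {f}" for f in REQUIRED_FIELDS - invoice.keys()]
    lines = invoice.get("lines", [])
    if lines and "total" in invoice:
        line_sum = sum(Decimal(str(line["amount"])) for line in lines)
        if line_sum != Decimal(str(invoice["total"])):
            errors.append(f"line items sum to {line_sum}, header total is {invoice['total']}")
    return errors  # an empty list means the invoice can flow straight through the RPA path
```

Everything that passes checks like these flows straight through RPA or workflow automation; only the residual exceptions become candidates for GenAI-assisted handling.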

Where GenAI fits: exception summarization, email drafting to suppliers, extracting ambiguous fields—but only after the process is standardized.

Example B: HR case management

Common GenAI pitch: “A chatbot for all HR questions.”

Often better approach:

  • Knowledge base cleanup (single source of truth)
  • Ticket categorization standards and routing rules
  • Self-service redesign for top 20 intents
  • RPA/workflows for repeatable requests (letters, address changes, benefits confirmations)

Where GenAI fits: agent assist, policy summarization, guided Q&A—plus careful governance for sensitive data.

Example C: Sales operations and CRM hygiene

Common GenAI pitch: “GenAI will fix forecast accuracy.”

Often better approach:

  • Pipeline stage definitions and exit criteria
  • Required fields and validation rules
  • Deal review cadence and accountability

Where GenAI fits: call summarization, next-best-action suggestions, proposal drafting—once the operating discipline exists.


7) Corporate Risks: The Unsexy List Leadership Must Own

Scaled AI use introduces a layered risk stack. Treat it like any other enterprise risk domain—cyber, financial controls, privacy, third-party, and reputational risk—because that’s what it is.

7.1 Security risks

  • Prompt injection and malicious instructions embedded in documents or web content
  • Data leakage via prompts, outputs, logs, or vendor retention
  • Model supply-chain risk: third-party dependencies, plugins, and tool integrations

7.2 Privacy and IP risks

  • Accidental exposure of sensitive data (employees, customers, contracts, health, financials)
  • Unclear IP ownership or training data provenance
  • Inappropriate use of copyrighted or licensed material

7.3 Compliance and regulatory risks

  • Sector-specific compliance constraints (financial services, healthcare, labor, consumer protection)
  • Emerging AI regulations that impose obligations on providers and deployers
  • Auditability requirements: “show your work” for decisions affecting people

7.4 Operational and model risks

  • Hallucinations (confident errors)
  • Drift as data and context change
  • Automation bias: humans over-trust outputs
  • Fragile integrations between models, tools, and enterprise systems

7.5 Reputational risks

  • Biased or harmful outputs
  • Inappropriate tone or brand voice
  • Customer trust erosion after a single public incident

8) The AI Operating Model: From Hype to Repeatable Delivery

If you want AI value without AI chaos, you need an operating model. Not a slide. A real one.

8.1 Create an AI Portfolio Board (not an AI hype committee)

Its job is to approve and govern use cases based on:

  • Value thesis and measurable KPIs
  • Risk classification and required controls
  • Data readiness and process maturity
  • Unit economics and compute budget
  • Change management and adoption plan

8.2 Standardize delivery patterns

Most enterprises should build repeatable blueprints:

  • RAG patterns for internal knowledge with access control (a retrieval sketch follows this list)
  • Agent assist for customer/employee support with human-in-the-loop
  • Document intelligence + validation workflows
  • Automation orchestration (workflow engines + RPA + APIs) where GenAI is only one component
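
As a sketch of the first blueprint, here is a deliberately simplified retrieval step that enforces access control before any content reaches a model. The data structures and the keyword-overlap ranking are toy assumptions; a real deployment would push the permission filter into the vector store and use proper embeddings.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: set = field(default_factory=set)  # which user groups may see this content

def retrieve(query: str, user_groups: set, index: list[Document], top_k: int = 3) -> list[Document]:
    """Toy retrieval: filter by permissions FIRST, then rank by naive keyword overlap."""
    visible = [d for d in index if d.allowed_groups & user_groups]
    scored = sorted(
        visible,
        key=lambda d: len(set(query.lower().split()) & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, docs: list[Document]) -> str:
    """Only permitted documents ever enter the prompt, so the model cannot leak
    content the requesting user was never allowed to read."""
    context = "\n---\n".join(f"[{d.doc_id}] {d.text}" for d in docs)
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
```

The design point is the ordering: permissions are applied at retrieval time, before prompt assembly, not patched on after generation.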

8.3 Implement “trust controls” as first-class features

  • Model evaluation gates (accuracy, toxicity, bias, security tests), sketched after this list
  • Continuous monitoring and alerting
  • Human override and escalation paths
  • Audit logs and retention policies
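
For the first control, a minimal sketch of an evaluation gate could look like the following; the metric names and thresholds are placeholders you would replace with your own evaluation harness and risk appetite.

```python
# Hypothetical gate: a release is blocked unless every metric meets its threshold.
GATE_THRESHOLDS = {
    "task_accuracy": 0.90,          # share of evaluation cases answered correctly
    "grounding_rate": 0.95,         # share of answers supported by retrieved sources
    "injection_block_rate": 0.99,   # share of known prompt-injection tests that are blocked
}

def evaluate_release(metrics: dict[str, float]) -> tuple[bool, list[str]]:
    """Compare measured metrics to thresholds and return (pass/fail, list of failures)."""
    failures = [
        f"{name}: {metrics.get(name, 0.0):.2f} < {threshold:.2f}"
        for name, threshold in GATE_THRESHOLDS.items()
        if metrics.get(name, 0.0) < threshold
    ]
    return (len(failures) == 0, failures)

ok, failures = evaluate_release(
    {"task_accuracy": 0.93, "grounding_rate": 0.92, "injection_block_rate": 0.995}
)
print("Release approved" if ok else f"Blocked: {failures}")
```

Wired into your deployment pipeline, a gate like this turns “we tested the model” into an auditable, repeatable release decision.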

8.4 Treat adoption as a change program

AI changes roles, behaviors, and accountability. Leaders should fund:

  • Training that targets specific workflows
  • Usage playbooks and guardrails
  • Measurement of adoption and outcomes
  • Feedback loops to improve prompts, retrieval, and UX

9) A Decision Scorecard You Can Use Next Week

Use this simple scorecard to decide whether GenAI is the right tool (a code sketch of the same logic follows the list):

  • Is the core problem language-heavy (summarize, draft, classify, search)? If yes: GenAI may fit. If no: consider process, RPA, or rules first.
  • Is the process stable and standardized? If yes: automation can scale. If no: reengineer the process first.
  • Is the decision regulated or safety-critical? If yes: use assistive patterns plus controls. If no: more freedom, but still monitor.
  • Can you measure value with a hard KPI and baseline? If yes: proceed. If no: don’t fund it yet.
  • Do unit economics work at scale (cost per transaction)? If yes: scale with governance. If no: redesign the architecture or stop.
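
If it helps to operationalize the scorecard, here is a minimal sketch that encodes the same questions as a routing function; the keys and messages are illustrative, not a standard taxonomy.

```python
def genai_fit_decision(answers: dict[str, bool]) -> str:
    """Turn the scorecard into a routing decision. Keys mirror the questions above."""
    if not answers.get("language_heavy", False):
        return "Consider process redesign, rules, or RPA before GenAI."
    if not answers.get("process_stable", False):
        return "Reengineer and standardize the process first."
    if not answers.get("hard_kpi_and_baseline", False):
        return "Do not fund yet: define a baseline and a hard KPI."
    if not answers.get("unit_economics_work", False):
        return "Redesign the architecture (tiering, caching) or stop."
    if answers.get("regulated_or_safety_critical", False):
        return "Proceed with assistive patterns, human oversight, and controls."
    return "Proceed and scale with governance."

print(genai_fit_decision({
    "language_heavy": True,
    "process_stable": False,
    "hard_kpi_and_baseline": True,
    "unit_economics_work": True,
    "regulated_or_safety_critical": False,
}))  # -> "Reengineer and standardize the process first."
```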

10) Closing: Less Religion, More Engineering

AI is real. The value is real. But so are the constraints: energy, cost, infrastructure, governance, risk, and organizational change capacity.

If you want to cure “AI nausea,” stop treating GenAI as a universal solvent. Treat it as a powerful tool in a broader operating system of value creation: process discipline, data quality, workflow design, automation engineering, and governance maturity.

Put differently: the companies that win won’t be those who shout “AI-first” the loudest. They’ll be the ones who build AI-smart—with economics, controls, and outcomes engineered into the system.

When “Success Fees” Backfire: The Capgemini–ICE Controversy and What It Teaches Consulting Leaders

Success fees (or incentive-based fees) are increasingly common in consulting contracts: part of the firm’s remuneration depends on outcomes. In theory, it aligns interests and de-risks the engagement for the client. In practice, if the metric is badly designed—or the client context is politically, legally, or ethically sensitive—this pricing structure can become a reputational accelerant.

That tension has been thrust into the spotlight by the controversy around Capgemini’s work with U.S. Immigration and Customs Enforcement (ICE), as reported by Le Monde. Beyond the noise and the outrage, there is a sober lesson here for every consulting leader: variable fees magnify governance requirements. Not just in sales. Not just in legal review. At the highest level of the firm—especially when the work touches sensitive missions, sensitive data, or outcomes that can be construed as coercive.

Before going further, a personal note: I used to be part of Capgemini Consulting (now Capgemini Invent, the group’s strategy consulting division). I have worked with many exceptional people there—client-first professionals with strong integrity and real pride in craft. My default assumption is not “bad actors,” but complex systems: decentralized P&Ls, fast-moving sales cycles, and contract structures that can drift into dangerous territory when incentives are poorly framed and escalation is ambiguous.


The mechanics: what “success fees” really are (and why they’re attractive)

In consulting, “success fee” is an umbrella term that can describe several pricing mechanisms:

  • Outcome-based fees: part of the fee depends on achieving a defined business result (e.g., cost savings, revenue uplift, SLA attainment).
  • Incentive fees / performance bonuses: additional compensation if delivery performance exceeds targets (often tied to operational KPIs).
  • Risk-sharing / gainsharing: the firm shares in realized value (sometimes audited), often with a “base fee + variable component” model.
  • Contingency-style arrangements: payment occurs only if a specific event happens (rare in classic management consulting, but present in certain niches).

Clients like these models for predictable reasons:

  • They transfer risk: “If you don’t deliver, we pay less.”
  • They signal confidence: the firm is willing to put skin in the game.
  • They simplify procurement narratives: “We only pay for results.”
  • They can accelerate decision-making: variable pricing can unlock budgets when ROI is uncertain.

Firms accept them because they can (a) win competitive bids, (b) monetize exceptional performance, and (c) strengthen long-term accounts. In a market where buyers push for value and speed, variable pricing is often framed as modern, fair, and commercially mature.

But here is the problem: success fees change behavior. They don’t just pay for outcomes; they shape how teams interpret “success,” how they prioritize work, and how they balance second-order consequences.


The core risk: incentives create “perverse optimization”

Any metric used for variable compensation becomes a target. And when it becomes a target, it stops being a good measure (Goodhart’s Law in action).

In commercial contexts, the damage is usually operational: teams optimize for the KPI rather than the business. In sensitive contexts, the damage can be broader:

  • Ethical drift: “If we hit this target, we get paid more” can quietly reframe judgment calls.
  • Externalities ignored: the metric may not capture collateral impacts (e.g., privacy harms, community trust erosion).
  • Weak accountability: teams delivering a narrow scope may not see—or be incentivized to consider—the societal effects.
  • Reputational amplification: once reported publicly, “bonus for X” can be interpreted as “profit from harm,” regardless of nuance.

This is why success fees require stronger governance than time-and-materials or fixed price: the contract is not only a commercial instrument; it becomes a behavioral design mechanism.


The Capgemini–ICE controversy as a governance stress test

Based on the reporting referenced above, the controversy is not just “working with ICE” (a politically charged client in itself). It is also the structure: the idea that compensation can be adjusted based on “success rates.”

Through a purely operational lens, an “incentive fee for performance” is not exotic. Many large organizations, including public bodies, write performance clauses and bonuses into contracts to drive service levels. The controversy arises because the human context changes the meaning of the metric: what looks like a neutral operational KPI can be interpreted as enabling enforcement outcomes against individuals.

Key lesson: In sensitive domains, incentive design is inseparable from moral narrative.

Leaders may see “a standard performance-based contract.” Employees, unions, media, and the public may see “paid more for more removals.” And once that framing sets in, you are no longer debating legal compliance—you are in a reputational and values crisis.


Why this can happen to any consulting firm

It would be comforting to treat this as a one-off “Capgemini story.” It is not. The structural conditions exist across the industry:

  • Decentralized growth models: subsidiaries, sector units, and local leadership with P&L accountability are designed to move fast.
  • Procurement language reuse: performance clauses and incentive mechanisms are often templated and reused.
  • Sales incentives: growth targets can create pressure to “make the deal work” and underweight reputational risk.
  • Ambiguous escalation: teams may not know when an engagement needs executive or board-level review.
  • “Not our policy domain” mindset: delivery teams focus on scope; public narrative focuses on outcomes.

And yes—every major consulting firm works with sensitive clients (in different ways and at different levels). The question is not “do we ever touch sensitive domains?” It is: how do we govern them, and how do we design incentives inside them?


A practical framework: how to govern success-fee contracts in sensitive contexts

If you lead a consulting business, here is a workable approach that does not rely on moral grandstanding or naive “we’ll never do X” statements. It relies on process, thresholds, and transparency.

1) Classify “sensitivity” explicitly (don’t pretend it’s obvious)

Create a sensitivity taxonomy that flags engagements involving one or more of the following:

  • Coercive state powers (detention, deportation, policing, surveillance, sanctions).
  • Highly sensitive personal data (immigration status, health data, biometric data, minors).
  • Life-and-liberty outcomes (decisions affecting freedom, safety, or basic rights).
  • High political salience (topics likely to trigger public controversy).
  • Vendor ecosystems with reputational baggage (partners with significant controversy history).

If a deal meets the threshold, it triggers enhanced review automatically.

2) Elevate approval: “highest-level review” must be real, not symbolic

The minimum for flagged engagements:

  • Independent legal review (not only contract compliance, but exposure assessment).
  • Ethics / values review with documented rationale (what we do, what we won’t do, and why).
  • Executive sign-off at a level that matches reputational risk (often group-level, not business-unit).
  • Board visibility when the potential public impact is material.

A review process that can be bypassed under commercial pressure is not governance—it is theater.

3) Redesign incentive clauses to avoid “harm-linked pay” narratives

In sensitive contexts, assume the variable fee will be summarized in one sentence by a journalist. If that sentence sounds like “paid more when more people are caught,” you have a problem—even if technically inaccurate.

Better patterns include:

  • Quality and compliance incentives (data accuracy, audit pass rates, error reduction).
  • Safeguard-linked incentives (privacy-by-design milestones, oversight controls, documented approvals).
  • Service reliability incentives (availability, response time) rather than “impact on individuals.”
  • Caps and neutral language that avoid tying remuneration to coercive outcomes.

Put bluntly: align incentives with process integrity more than enforcement yield.

4) Build an “exit ramp” clause you can actually use

Sensitive engagements should include contractual provisions that allow termination or scope adjustment when:

  • new facts emerge about downstream use,
  • public trust materially deteriorates,
  • the client’s operating model changes in ways that alter ethical risk.

Without an exit ramp, leadership can end up trapped between “we must honor the contract” and “we can’t defend this publicly.”

5) Treat internal stakeholders as part of the risk surface

Employee backlash is not a PR anomaly; it is a governance signal. When teams learn about a sensitive contract through the press, trust collapses quickly.

For flagged deals, firms should pre-plan:

  • internal communication explaining scope, constraints, safeguards, and decision rationale,
  • channels for concerns and escalation without retaliation,
  • clear boundaries for what employees will and won’t be asked to do.

Where I land: integrity is common; governance must catch up

I do not believe most people inside Capgemini—or any large consulting organization—wake up aiming to do unethical work. The industry is full of professionals who care deeply about clients, teams, and societal impact.

But that is exactly why governance matters: integrity at the individual level does not prevent system-level failure. When contract incentives, client sensitivity, and escalation pathways are misaligned, even good people can end up defending the indefensible—or learning about it after the fact.

Success fees are not inherently wrong. In many commercial transformations, they can be a powerful alignment tool. The lesson is narrower and more practical:

  • Success fees should be treated as “behavior design.”
  • Sensitive clients should trigger “highest-level review” automatically.
  • Incentives must be defensible not only legally, but narratively.

If you lead a consulting practice, ask yourself one question: “If this clause were read out loud on the evening news, would we still be comfortable?” If the answer is “it depends,” the contract needs rework—before signature, not after backlash.