25 March 2026 · Pedro Aldea

70% of AI projects fail. Not because of the technology.

Only 39% of Spanish companies see positive ROI from AI. The problem isn't the algorithm. It's the process it's applied to.

Only 39% of Spanish companies that invest in artificial intelligence achieve a positive return. The global average is 47%. Spain is eight points below.

But the most telling number isn’t that one. It’s this: of the AI projects that fail, 70% don’t fail because of the technology. They fail because nobody asked the right operational questions first.

The mistake happens before the code

When a company decides to “implement AI,” the first instinct is to find the tool. A chatbot, a computer vision system, a demand forecasting model. The market is full of excellent, increasingly affordable options.

But the tool is the answer to a question nobody has asked: what process are we trying to improve?

Not in vague terms. Specifically: how many steps does it have? How long does it take? Where does it break? How much does each failure cost? Who depends on it?

Without that question, AI gets applied to a vacuum. And in a vacuum, even the best technology produces mediocre results.

Eliminate before you automate

In one of our industrial data projects (508 tables, 3 million rows, 327 brands with 1,400 variations), 70% of the work was eliminating and organising before automating anything.

Removing duplicate tables. Unifying criteria that four departments had defined differently. Cleaning data that hadn’t been touched in years.
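That kind of unification work is mundane but concrete. As a minimal sketch (with made-up brand names and counts, not the project's actual data), collapsing four departmental spellings of one brand into a single canonical key might look like this:

```python
# Hypothetical extract: the same brand recorded several ways by
# different departments. Values are illustrative only.
rows = [
    ("ACME S.L.", 10),
    ("Acme", 5),
    ("ACME, S.L.", 7),
    ("acme sl", 3),
    ("Beta Corp", 20),
]

def normalise_brand(name: str) -> str:
    """Collapse case, punctuation and legal suffixes into one canonical key."""
    key = name.lower().replace(",", "").replace(".", "").strip()
    for suffix in (" sl", " corp"):
        if key.endswith(suffix):
            key = key[: -len(suffix)].rstrip()
    return key

# Aggregate every variant under its canonical key.
totals: dict[str, int] = {}
for brand, units in rows:
    key = normalise_brand(brand)
    totals[key] = totals.get(key, 0) + units

print(totals)  # five raw labels collapse to two real brands
```

Multiply that by 327 brands with 1,400 variations and the "70% of the work" figure stops being surprising.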

That’s not AI work. That’s operations work. But it’s the work that determines whether AI succeeds or fails.

If you automate a disordered process, you get automated disorder. Faster, yes. But just as broken.

Why Spain underperforms the global average

The pattern we see on the ground: Spain has prioritised adopting tools over understanding the problem.

The subsidy ecosystem has been a powerful driver of digitalisation. Kit Digital, Ticket Innova, FEDER funds, now another 40 million euros from the government for AI use cases. This has put tools in the hands of thousands of companies.

But subsidising the tool is not the same as subsidising the diagnosis that should come first. Without diagnosis, the tool sits underused.

60% of Spanish companies already use some form of AI technology. But only 12% have it integrated into their processes. The gap between “using” and “integrating” is exactly the distance between having a tool and having a process that works with that tool.

The sequence that produces results

After multiple projects in industrial environments, the sequence that works is always the same:

  1. Eliminate what shouldn’t exist. Redundant steps, approvals that add no value, reports nobody reads.
  2. Simplify what remains. Unify criteria, reduce variations, create standards.
  3. Automate what’s repetitive. Clear rules, clean data, predictable processes.
  4. Augment with AI what requires judgement at scale. Classification, prediction, anomaly detection.

AI is step 4. Not step 1.
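The division of labour between steps 3 and 4 can be sketched in a few lines. Everything here is illustrative (the function names and rules are hypothetical, and the model is a stub): explicit, auditable rules handle the repetitive bulk, and only what the rules cannot decide falls through to AI.

```python
def classify_ticket(text: str) -> str:
    text = text.lower()
    # Step 3: clear, auditable rules cover the predictable cases.
    if "invoice" in text or "billing" in text:
        return "finance"
    if "password" in text or "login" in text:
        return "it-support"
    # Step 4: augment with AI only where judgement is needed.
    return ai_model_predict(text)

def ai_model_predict(text: str) -> str:
    """Placeholder for a trained classifier; returns a catch-all here."""
    return "needs-review"

print(classify_ticket("Forgot my password"))          # it-support
print(classify_ticket("Question about my invoice"))   # finance
print(classify_ticket("Strange noise from the press"))# needs-review
```

The point of the ordering: the rules are only reliable because steps 1 and 2 already removed the duplicates and unified the criteria they depend on.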

Companies that follow this sequence are in the 39% that recover their investment. Those that jump straight to step 4 are in the other 61%.

The real test

The proof that an AI project has worked isn’t that the model achieves good accuracy in a lab. It’s that the client’s team uses it in production, with real data, every day, and can adjust it when something changes, without needing anyone external.

If, six months in, you’re still calling the vendor for every adjustment, the project hasn’t succeeded. It has created a new dependency.

Success is measured the same way as any operations project: before and after. With numbers. In production.


Data cited: Javadex (AI ROI 2026), KeepCoding (consulting vs internal capability), Ditrendia (AI adoption Spain 2025), Control Group (business trends 2026).