Wednesday, 25 February 2026

Guest Blog: Shaky Technology, Steady Momentum: How Generative AI Innovation Survives Setbacks

by Choroszewicz and Rannisto

Choroszewicz, M., & Rannisto, A. (2026). AI innovation at the boundaries: Justifying a generative AI decision support tool. Big Data & Society, 13(1). https://doi.org/10.1177/20539517261424159 (Original work published 2026)

Generative AI is entering the public sector under intense conditions: political pressure, organizational enthusiasm, and a widely shared conviction that innovation must move fast. Public organizations across Europe are experimenting at pace. These trials are often wrapped in familiar promises – greater efficiency and productivity, cost savings, better services for citizens, and relief from the friction of bureaucratic routines.

Our paper examines a generative AI decision support tool in Finnish public administration and shows how AI projects can continue even when their promised outcomes remain unfulfilled. We found that the project was sustained through boundary-spanning practices and a powerful “package” of justification frames that made the tool appear irresistible across organizational and professional boundaries.

How a Technology Becomes Irresistible

We identified nine recurring justifications that together formed a protective structure around the tool’s development. This structure did two things at once: it kept the innovation moving from one experiment to the next, and it buffered the project against criticism when doubts about the tool’s reliability began to surface. We observed how these frames emerged and circulated through the project events, artefacts, and representations, making continued development appear well justified. Some of these frames drew their justificatory force from the imagined tool itself, while others drew on the conditions and practices that emerged around it.

The tool-oriented frames leaned on familiar AI promises – efficiency and cost savings – but also on claims about employee well-being and fairness for citizens, with desirability often functioning as a proxy for value. Around the tool, a set of process- and ideology-oriented frames cast speed, bold initiative, and experimentation as virtues, normalized setbacks, and sustained innovation momentum.

Why It Matters Where Justifications Land

Justifications do not carry the same weight everywhere: what counts as a “good reason” depends on the institutional and cultural landscape in which it is received. In our paper, the Nordic welfare state context seemed to make certain appeals especially resonant, because they aligned organizational performance with worker protection and civic ideals.

A specific justificatory “package” stood out as particularly powerful, combining elements of (i) efficiency: faster, more consistent operation and decisions; (ii) employee well-being: reduced cognitive load for claims specialists; and (iii) civic fairness: fairer, more transparent, and more equitable outcomes for citizens.

Together, they formed a compact public-value package that travelled across groups and stakeholders, making continued development appear justified even though the tool’s performance remained limited.

Boundary Work: Alliances, Divides, and Shifting Responsibilities

Because the justification frames did not operate in isolation, their enactment and force also depended on boundary work, the practices through which organizational and professional lines were crossed, reinforced, or temporarily rearranged as the project unfolded. In other words, what could be justified, to whom, and on what terms was shaped by how relationships, roles, and resources were arranged around the tool.

Collaborative and configurational boundary work took the form of alliances with managers and consultants, alongside practical reconfigurations of existing boundaries to make experimentation possible. New meeting formats, shared artefacts, experiment arrangements, and reporting practices helped gather resources and attention around the tool and sustain its development.

Competitive boundary work surfaced most clearly at the interface between innovation and frontline work – a divide between the flexible world of innovation and the controlled routines, limited risk tolerance, and evaluative standards of frontline work. At this interface, the central questions emerged: whose judgments carried weight in defining what the tool was, what it should do, and how its performance should be interpreted?

As the tool’s promises proved difficult to realize, responsibility for “making it work” increasingly drifted from the tool’s outputs toward organizational conditions, expectations, and patterns of use: user interaction, training, prompting practices, document formats, workflow changes, and “AI readiness” more broadly. This shift did not remove the tool’s technical limitations, but it changed where they were made visible – and where critique tended to land – recasting innovation success less as technical robustness and more as organizational and user transformation.

Failure as Business as Usual

The tool’s ongoing inability to deliver on its core promises did not bring the project to a halt. Instead, failure was often folded into the rhythm of innovation praxis as something to be anticipated, worked around, and learned from. Within this framing, failure became normalized as a default condition of a progressive innovation, a spur to further activity. Continuing uncertainty was taken as part of the work itself, and so further investments of time, attention, and resources appeared not only reasonable but necessary.

At the same time, the tool’s technical opacity made failure difficult to locate and therefore difficult to settle. When it is hard to say why a system fails, it is also hard to know what would count as a decisive reason to stop. Meanwhile, the hype surrounding generative AI, combined with the rapid pace of language model development, made it plausible to expect that technical improvements would arrive “from the outside” as models matured. In our case, that expectation proved wrong several times.

Why Some AI Projects Become Hard to Stop

Our paper shows that sustaining AI innovation is not merely a technical matter. It relies on ongoing boundary-crossing practices and on powerful justifications that resonate with shared values and organizational aspirations. Crucially, what matters is often not any single justification, but how certain justifications cluster into persuasive packages that fit the context in which they circulate.

Our paper also shows that the tool’s development persisted not because it met its promises, but because the surrounding justificatory dynamics made continuation seem reasonable and even difficult to interrupt. Such dynamics can generate momentum, mobilize attention, and direct resources. But they can also narrow the space for critical reflection – locking organizations into particular innovation trajectories and obscuring consideration of alternative pathways, including the option of pausing.