When Human Agency Matters in the In-Between World

In a world shaped by artificial intelligence and profound social transformation, leadership can no longer rely on control or certainty. It requires the capacity to navigate between two realities: the space where yesterday’s models have lost their explanatory power and tomorrow’s possibilities are still taking form. We are not merely experiencing disruption. We are inhabiting a sustained liminal condition, a structural in-between that is technological, psychological, cultural and geopolitical at the same time.

Cognitive Ecosystems and the Erosion of Agency

AI does not simply accelerate existing processes. It reorganises how meaning and agency are distributed within what I describe as cognitive ecosystems. These ecosystems increasingly mediate how decisions are made, how knowledge circulates, and how value is created. As a result, leadership today cannot be reduced to strategic implementation or digital transformation. It confronts deeper questions about freedom of choice, identity and the frameworks through which societies structure meaning.

When I address this theme in conversations with senior leaders, I am aware that it can evoke discomfort, and sometimes even resistance. Not because AI is unfamiliar, but because it touches something more existential than technology. It raises the question of what remains uniquely human as systems begin to simulate language, judgement and emotional tone with growing sophistication. The challenge is not whether machines are becoming human-like. The real risk lies in humans gradually adapting themselves to the logic of machines, simplifying their own complexity to fit algorithmic grids.

This dynamic is not entirely new. In the 1960s, the ELIZA experiment already demonstrated how quickly humans attribute understanding and intention to even the most rudimentary chatbot (anthropomorphism). Although users knew they were interacting with code, they responded as if a relational presence were involved. Today, the so-called ELIZA effect operates at a far more advanced level. AI systems simulate coherence, empathy and contextual awareness in ways that blur the boundaries between tool and interlocutor. In such environments, safeguarding human agency becomes a leadership responsibility rather than a footnote.

Self-leadership and Consilience in the In-Between World

Leadership in this liminal space therefore begins with self-leadership. It requires the psychological maturity to resist premature closure, to question the seduction of artificial certainty, and to remain accountable for human judgement in environments where AI can simulate coherence, empathy and even wisdom. Yet self-leadership alone is not sufficient, because agency is never only personal. It is shaped by the systems in which we operate.

To lead responsibly in such conditions demands consilience: the capacity to connect technological insight with ethics, economics, culture and geopolitics, and to generate meaning where complexity appears fragmented. Consilience safeguards agency because it allows leaders to see beyond algorithmic framing and connect what does not automatically align. This uniquely human competence does not lie in processing more data, but in forming connections that have not yet been mapped, and in imagining futures that are not simply extrapolations of the past.

From this broader perspective, it becomes impossible to separate the geopolitical dimension from the question of agency. AI is embedded in cultural assumptions and regulatory choices that shape how societies relate to risk, freedom and control. Children in some contexts grow up experimenting with AI, learning early that systems can be questioned, redirected and interpreted rather than simply followed. In other cases, they encounter AI primarily through caution and constraints. Over time, these differences shape how agency is understood and practised. The global divide is therefore not only technological. It is also an agency divide, influencing what kinds of liminal leaders emerge and how confidently they navigate complexity.

Facing the Mirror of Human Agency

AI thus functions as a mirror rather than merely a tool. It reflects our assumptions about intelligence, authority and value. The decisive question is whether we are willing to look into that mirror and consciously choose how we position ourselves within these emerging cognitive ecosystems. Leadership in liminal times is not about defending the past or surrendering to an algorithmically steered future. It is about cultivating the discernment to move between both, while safeguarding the depth and dignity of human agency.


Discover how we can help your team step beyond the in-between and unlock new possibilities.

Contact us now

The purpose of our Amplifiers, keynotes and supporting services is to help you reflect on your mindset, so that you can use the results to engage with it and reframe it.

Both the free amplifier and the amplifier for professionals and teams are connected to the five drives that underpin your learning mindset. But remember: the total score isn't what matters; it's the progress you choose to make that truly counts.

