19 Jan 26

From Tools to Teammates: Architecting the New Dialogue

by Mehdi Ghissassi, Chief Product Officer, Abu Dhabi, UAE

Eighteen months ago, ai71 was one person with an idea. Today, we're 170+ people building enterprise AI agents that give us a glimpse into how humans and intelligent systems will collaborate to achieve better outcomes.

The journey has been faster, more hectic, and more educational than we imagined. As we head to Davos for the World Economic Forum 2026, the theme "A Spirit of Dialogue" captures both where we've been and where we're going.

Building agents that bring the benefits of AI to humans and organizations requires dialogue at multiple levels: regulatory, technical, product, UI, legal and policy, financial, to name a few. Eventually, the dialogue will extend to the agents themselves. The technology isn't just a tool anymore but a participant in how work gets done.

Beyond Capability: Building Systems You Can Trust

There's a useful parallel to AlphaGo that shapes how we think about building enterprise AI agents. Its breakthrough came from combining learning from deep neural networks with clever search. Neither alone could have discovered the novel strategies and fascinating moves that expanded human understanding of the game.

We see enterprise agents requiring a similar architecture, but with one critical addition: Guardrails.

Search: We worked with AiiR Labs, our sister research team at TII, to build state-of-the-art search and retrieval systems that work at scale, on premises, and across a wide variety of document types. GovGPT, one of our early projects, demonstrated we could reduce hallucinations and ground responses in government documents. Our GovGPT collaboration with government entities was the only AI project from the Gulf selected for the Global AI Impact Summit in Paris last year.
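
To make "grounding" concrete, here is a minimal sketch of how a retrieval-grounded answer is assembled. The vector_index and llm objects are hypothetical stand-ins, not our actual stack:

```python
# A minimal sketch of retrieval-grounded answering. `vector_index` and
# `llm` are hypothetical stand-ins for an on-premise retrieval system
# and model endpoint.

def answer_grounded(question: str, vector_index, llm, k: int = 5) -> str:
    # Retrieve the k passages most relevant to the question.
    passages = vector_index.search(question, top_k=k)

    # Number each passage so the model can cite its sources.
    context = "\n".join(
        f"[{i + 1}] {p.text} (source: {p.document_id})"
        for i, p in enumerate(passages)
    )

    # Constrain the model to the retrieved context to limit hallucination.
    prompt = (
        "Answer the question using ONLY the numbered passages below. "
        "Cite passage numbers for every claim. If the passages do not "
        "contain the answer, say so.\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}"
    )
    return llm.generate(prompt)
```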

Learning: There's significant excitement around complex multi-agent systems, but the hard problems aren't where most people expect. It's not about orchestrating dozens of agents; it's about making an LLM in a loop reliable enough to trust. Our agents use the ReAct framework (reasoning and acting in cycles), but the engineering challenge is making it production-ready: stateful workflows that maintain context across multi-step tasks, tool integration with enterprise systems where APIs fail and rate limits matter, and error recovery that knows when to backtrack versus when to ask for help. This is the difference between answering questions and solving problems.
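
As a rough illustration of what "an LLM in a loop" means, here is a minimal ReAct-style sketch. The llm object and the tools registry are hypothetical stand-ins; a production version layers on the state persistence, rate-limit handling, and escalation paths described above:

```python
import json

# A minimal sketch of a ReAct-style loop: the model alternates between
# reasoning and acting until it emits a final answer. `llm` and `tools`
# are hypothetical stand-ins.

def react_loop(task: str, llm, tools: dict, max_steps: int = 10) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        # Ask the model for its next thought and action, as JSON.
        step = json.loads(llm.generate(
            "\n".join(history)
            + '\nRespond with JSON: {"thought": ..., "action": ..., '
              '"input": ...} or {"thought": ..., "final_answer": ...}'
        ))
        if "final_answer" in step:
            return step["final_answer"]

        # Execute the chosen tool; feed failures back as observations
        # so the model can backtrack or try a different approach.
        tool = tools.get(step["action"])
        try:
            observation = (
                tool(step["input"]) if tool
                else f"Unknown tool: {step['action']}"
            )
        except Exception as exc:  # enterprise APIs fail; don't crash the loop
            observation = f"Error: {exc}"
        history.append(f"Thought: {step['thought']}")
        history.append(f"Action: {step['action']} -> Observation: {observation}")

    # Step budget exhausted: ask for help instead of guessing.
    return "Unable to complete the task; escalating to a human."
```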

Guardrails: For enterprise and government work, capability alone isn't enough. AI changes the data privacy equation: where does data go, how is it processed, who sees it, does it train future models? A government ministry processing citizen data or a healthcare organization handling patient records can't use systems where that information might train models used by others.

This is why we're building infrastructure-agnostic and LLM-agnostic systems. It's a harder problem (different clouds, different models, different deployment constraints), but it's essential. AI sovereignty isn't abstract policy. It's a technical requirement for industries where data residency, model provenance, and operational control are non-negotiable.
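
Here is a minimal sketch of what "LLM-agnostic" looks like at the code boundary. The class and function names are illustrative, not our actual architecture:

```python
from abc import ABC, abstractmethod

# A sketch of an LLM-agnostic seam: agent logic targets one narrow
# interface, and each deployment binds a backend that satisfies its
# data residency and control constraints. All names are illustrative.

class ChatModel(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return a completion. Implementations decide where data goes."""

class OnPremModel(ChatModel):
    """Talks to a model server hosted inside the customer's own network."""
    def __init__(self, endpoint: str):
        self.endpoint = endpoint

    def complete(self, prompt: str) -> str:
        # POST the prompt to the local server; stubbed here for brevity.
        return f"[completion from {self.endpoint}]"

def triage_document(text: str, model: ChatModel) -> str:
    # Agent code is written once, against the interface only; swapping
    # clouds or models means swapping the ChatModel binding, not the agent.
    return model.complete(f"Summarize this document and flag risks:\n{text}")

# Per-deployment binding: sovereign installs use the on-premise backend.
result = triage_document("...", OnPremModel("https://models.internal/v1"))
```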

We're building capability within guardrails, not despite them.

Beyond the Lab: Real Deployment, Real Impact

Today we're deploying these agent systems across contexts that test the architecture, including:

A federal digital government authority will orchestrate how dozens of government agencies use AI to reinvent how they work, a scale and coordination challenge that requires absolute trust in data handling.

AgriLLM, an open-source agricultural AI, brings expert advice to smallholder farmers’ fingertips around the world, thanks to our amazing partners, including the Gates Foundation and the World Bank.

A public-sector legal authority is starting to use our assistant to streamline complex casework by automating analysis, tracking progress, detecting inconsistencies, and delivering analytical insights for decision-making.

Each deployment teaches us something new about the gap between AI capability and organizational adoption.

Beyond Technology and Product: Solving the Adoption Problem

Building agents is one thing. Integrating them into organizations is another. It requires understanding how work needs to change, not just how to automate it.

That's where our strike teams come in: temporary, cross-functional groups that embed with design partners to solve adoption challenges. The philosophy: build with organizations, not for them. They shadow workflows, map data flows, and ship working prototypes. The goal isn't to gather requirements; it's to understand what actually happens versus what people say happens. They bridge the gap between technical capability and how work needs to reorganize.

This often leads to pivots. In one procurement process, we started by helping the procurement team validate documents. That worked, but the second iteration was different: we rebuilt it as an AI-powered submission form that eliminated the validation step entirely. The first iteration automated an existing workflow; the second changed it.

Counterintuitively, scaling AI adoption might mean adding more humans, not fewer. One day, agents might help with this too. For now, it requires human judgment.

We started with ourselves. ai71 became its own first design partner, using agents daily across operations, research, and customer work.

I work alongside a Chief of Staff agent that provides visibility into what each team is working on and where they're encountering blockers. The research team has an agent that monitors sources and produces initial briefs.

If we can't rethink our own work with AI, we have no business helping others rethink theirs.

Beyond Today: The Questions Ahead

We are at the beginning of the autonomous-agent era, with a lot still to figure out on the technology front. We are also on the cusp of the agent-as-colleague era (agent reports and agent managers will soon follow), which comes with numerous unanswered questions, such as: how will humans and machines nurture a spirit of dialogue?

As we head to Davos, we're bringing more questions than answers:

How do we build responsible AI that respects sovereignty without sacrificing capability?

How do we navigate the research-to-production gap?

What are the real opportunities and risks as work fundamentally changes?

The path forward requires collaboration across borders, sectors, and disciplines. We aren't just deploying software; we are architecting a new era of collaboration.

The journey from tools to teammates has only just begun.