Regulated AI
01 Sep, 2025
3 min read
Artificial Intelligence has already transformed consumer technology, but its impact in regulated industries, such as law, healthcare, and finance, is far more complex. These domains require not only intelligence but also accountability, compliance, and explainability.
Over the past year, we’ve been building Malakah, an AI hub specialized in Arabic legal systems. The journey has underscored a central truth: building AI for regulated industries is less about raw intelligence and more about trust, governance, and human oversight. In this article, I’ll share lessons from our experience and connect them with the broader challenges enterprises face when adopting AI in high-stakes contexts.
The urgency around AI regulation has accelerated worldwide, with frameworks such as the EU AI Act taking effect. These frameworks aren't abstract; they shape how legal AI must be designed, audited, and monitored.
Despite these challenges, the potential is massive: early adoption has already shown that legal AI can save lawyers countless hours while improving consistency and access.
Legal AI carries risks unlike almost any other industry: a hallucinated citation or a misread statute can directly harm clients and expose firms to liability. These risks underline why compliance-by-design is not optional but fundamental.
When developing Malakah, we encountered these challenges first-hand. A few lessons stand out:
From day one, we aligned our workflows with the EU AI Act’s principles: human oversight, transparency, and risk assessment. This ensured Malakah wasn’t just functional, but also compliant.
The Arabic legal corpus posed unique challenges: fragmented sources, inconsistent formatting, and linguistic nuance. We had to design robust pipelines for corpus structuring, document chunking, embedding selection, and domain-tuned indexing, ensuring retrieval was precise and legally faithful.
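A retrieval pipeline like the one described — chunking documents, embedding them, and ranking by similarity — can be sketched in miniature. This is an illustrative toy, not Malakah's implementation: the bag-of-words "embedding" stands in for a real domain-tuned embedding model, and the chunk sizes are arbitrary.

```python
from collections import Counter
from math import sqrt

def chunk_document(text, max_words=120, overlap=20):
    """Split a document into overlapping word-window chunks."""
    words = text.split()
    chunks, start = [], 0
    while start < len(words):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
        start += max_words - overlap  # overlap preserves context at boundaries
    return chunks

def embed(text):
    """Toy embedding: a bag-of-words frequency vector (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Rank chunks by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```

In a production legal system, the embedding step is where domain tuning matters most: a general-purpose model will miss the linguistic nuance of Arabic legal text that the article highlights.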
We made a deliberate choice: Malakah does not replace legal experts. Instead, it supports lawyers with evidence-backed suggestions, while leaving ultimate decisions to humans. For legal professionals, Malakah is there to move them up the value chain. For the public, it is an easy entry into the legal landscape, which can otherwise seem daunting.
Instead of a static retrieval model, we built agentic workflows: multi-step reasoning chains that dynamically select strategies (search, summarization, clause comparison, cross-document verification). This made outputs more reliable and context-aware.
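The pattern of an agentic workflow — a planner that picks the next strategy from the current state, with each step's output feeding the next — can be sketched as below. The tool names and routing rules here are hypothetical placeholders, not Malakah's actual agents; the point is the shape: dynamic strategy selection plus an audit trail.

```python
# Toy agent steps; each reads and extends a shared state dict.
def search(state):
    state["evidence"] = f"passages matching '{state['query']}'"
    return state

def summarize(state):
    state["summary"] = f"summary of {state['evidence']}"
    return state

def verify(state):
    state["verified"] = "evidence" in state and "summary" in state
    return state

TOOLS = {"search": search, "summarize": summarize, "verify": verify}

def plan(state):
    """Choose the next step from the current state (illustrative routing rules)."""
    if "evidence" not in state:
        return "search"
    if "summary" not in state:
        return "summarize"
    if "verified" not in state:
        return "verify"
    return None  # nothing left to do

def run_agent(query, max_steps=5):
    state = {"query": query, "trace": []}
    for _ in range(max_steps):
        step = plan(state)
        if step is None:
            break
        state = TOOLS[step](state)
        state["trace"].append(step)  # audit trail for human oversight
    return state
```

The explicit `trace` is not incidental: in a regulated setting, being able to show which strategies the system chose and why is part of the auditability requirement.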
Evaluation became our differentiator. We quickly realized that accuracy in regulated AI isn't just about BLEU scores or token perplexity; we needed multi-layered evaluation.
This evaluation pipeline is still evolving, but it’s our strongest defense against hallucinations and untrustworthy outputs.
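A multi-layered evaluation of the kind described might look like the sketch below: automated layers run in order, and any failure routes the answer to human review. The layer names, the two-word overlap heuristic, and the `[1]`-style citation marker are all assumptions for illustration, not the actual pipeline.

```python
def grounding_check(answer, sources):
    """Layer 1: is every answer sentence supported by a retrieved source?
    (Crude heuristic: at least two shared words with some source.)"""
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    def supported(sentence):
        words = set(sentence.lower().split())
        return any(len(words & set(src.lower().split())) >= 2 for src in sources)
    return all(supported(s) for s in sentences)

def citation_check(answer):
    """Layer 2: does the answer carry at least one citation marker like [1]?"""
    return "[" in answer and "]" in answer

def evaluate(answer, sources):
    """Run layers in order; any failure flags the answer for human review."""
    layers = {
        "grounded": grounding_check(answer, sources),
        "cited": citation_check(answer),
    }
    layers["needs_human_review"] = not all(layers.values())
    return layers
```

The key design choice is the fallback: automated checks never approve an answer outright in isolation — a failure at any layer escalates to a human, which matches the human-oversight principle described above.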
Our journey with Malakah suggests a blueprint for any enterprise exploring AI in regulated sectors: align with regulation from day one, invest heavily in data pipelines, keep humans in the loop, and evaluate relentlessly.
AI in regulated industries is not a race for the smartest model. It’s a race to build the most trustworthy, auditable, and human-aligned systems.
With Malakah, we’ve learned that the future of AI in law won’t be decided by raw capability, but by how well we integrate compliance, oversight, and resilience into the DNA of our systems.
In regulated industries, trust is the true competitive advantage.
Established in 2021, we're a global IT Services provider delivering innovative business solutions and technology services worldwide.