Many firms lack an AI policy for legal work

An alarming 43% of companies lack an AI policy for legal work, with smaller businesses facing heightened exposure
As AI continues to integrate into the legal sector, organisations appear to be struggling to keep up with its rapid evolution. Research indicates that nearly 70% of legal professionals now incorporate AI into their work; however, a staggering 43% of their companies have yet to establish a formal AI policy, and more than half have provided no training on AI use. For small and mid-sized businesses, the absence of structured governance around AI can leave them significantly more vulnerable to the risks associated with its use.
Zilvinas Girenas, head of product at nexos.ai, highlights the pressing issue, stating “When companies do not have a clear policy for legal AI use, people will still find their own way to use these tools”. This ad-hoc approach to AI adoption is particularly concerning for smaller companies, many of which lack dedicated legal operations, well-defined workflows, and essential oversight structures. With no mechanisms to monitor or audit AI interactions, they risk mishandling sensitive legal material that demands strict confidentiality and secure handling.
Legal teams cite data security as their primary concern regarding AI (46%), followed closely by ethical considerations (42%). Yet many organisations remain ill-prepared to address these issues because their governance frameworks are inadequate. A mere 9% of respondents report having an actively enforced AI policy, an alarming gap in legal workflows that handle sensitive information such as contracts and internal investigations.
The lack of formal guidelines often manifests in seemingly innocuous behaviours, such as contract managers using public chatbots for quick reviews or finance personnel summarising legal documentation without validation from legal counsel. “Once legal work starts moving through unapproved AI tools, sensitive information can leave a company’s normal controls without anyone noticing,” warns Girenas. The risk extends beyond inaccuracy; it undermines confidentiality, privilege, and the overall security of legal data.
It is evident that the governance gap is widening, especially among smaller firms. While larger legal departments are increasingly implementing formal AI oversight, SMBs often operate in a vacuum, with employees relying on personal judgement to address immediate challenges. Girenas notes, “The risk for SMBs is not reckless use of AI, but invisible workflow change.” When such tools are integrated into daily operations without defined usage policies, the resulting gaps undermine both governance and compliance, creating a perfect storm for legal exposure.
To mitigate these risks, SMBs need not build an expansive legal team; a few straightforward actions suffice. Establishing a simple AI policy is crucial: it should define approved tools, delineate acceptable use cases, and stipulate that sensitive data must remain out of public AI systems. Additionally, companies should vet and approve tools prior to adoption and ensure that sensitive legal data is never fed into unapproved applications.
Moreover, designating an individual to oversee AI usage within the organisation can enhance awareness of potential risks and help instruct employees on the limitations of AI outputs. Finally, requiring human review before any legal decision is made can be instrumental in maintaining oversight. By taking these steps, SMBs can better safeguard themselves against the risks of unregulated AI adoption.
