Artificial intelligence (AI) is bringing an array of opportunities and challenges to businesses. Not least among these is legal risk.
DX Journal spoke with Carole Piovesan, Litigator and Team Lead of the AI, Privacy, Cybersecurity and Data Management group at McCarthy Tétrault, to find out how AI will affect businesses, who should be addressing the legal risks AI poses to society, and how the legal practice itself is being affected by AI.
DX Journal: To what extent does AI pose a legal risk to enterprises looking to incorporate this technology?
Carole Piovesan: Since there’s a lot of talk about AI, let me start with a very short definition. AI is an umbrella term that encompasses different processes such as natural language processing (like Siri), image recognition, machine learning and deep learning. The “AI” that draws a lot of attention these days – both negative and positive – is usually machine learning and deep learning, both of which involve self-teaching and self-executing systems.
AI offers a lot of opportunity for businesses looking to improve efficiencies and cut costs. Depending on the purpose of the system, however, it can present certain legal risks.
For instance, AI systems require lots of data to train them to perform a given task accurately. Access to data is largely governed by the Personal Information Protection and Electronic Documents Act (PIPEDA), which sets parameters for how to legally access, store and use data, among other things. Companies need to understand how to comply with privacy legislation to avoid reprimand or sanction. In addition, amassing huge quantities of data could lead to competition issues around data monopolies, among other things.
Then there is the issue of liability where a system does something it shouldn’t have done or doesn’t do something it should have done. In self-teaching and self-executing systems, questions arise as to who should bear liability for harm caused by the system. This leads to the corollary issue of proof. The pathway to a particular output for these systems is notoriously difficult to understand.
There is a whole movement around increasing the interpretability of AI systems, particularly where systems are used in matters that directly affect human life such as medical diagnosis and criminal law.
DX Journal: Which industries are likely to be most affected by the legal risks that AI brings to businesses?
Piovesan: I wouldn’t think of it as industries that will be most affected by AI, but tasks. Every industry has the potential to be affected by advanced technologies including deep learning systems. The idea is that AI systems can perform routine, repetitive tasks better, faster and cheaper than humans. Every industry has processes that are repetitive in nature and that can be improved by AI.
That said, in the near term, industries that are expected to be deeply affected by AI include transportation, medicine, law, insurance, accounting, manufacturing, retail and financial services, among others. Less obvious sectors that are benefitting from AI include oil and gas and mineral extraction, where AI is being used to extract natural resources more safely and efficiently.
DX Journal: What should businesses focus on as they begin to onboard AI tools?
Piovesan: Depending on the purpose and complexity of the technology, businesses will want to develop a better understanding of AI technologies, as well as risk management strategies for incorporating more sophisticated technologies into their operations. Increasingly, we’re seeing an interest in creating legal assessment tools specific to AI technologies.
DX Journal: Who is addressing the legal risks created by AI in society?
Piovesan: Many different actors have a role to play in ensuring safe, beneficial and productive innovation. I would say that provincial and federal governments need to kick-start a dialogue with the academic and private sectors around issues specific to AI technology. One critical area for greater discussion is with respect to the interpretability of AI systems, and requirements for explainability for systems used with direct impact on human rights and well-being.
The EU and UK are examples of jurisdictions that are conducting regular consultations to inform a possible regulatory framework on AI. Canada has also held such consultations, but I think more is needed.
The academic and private sectors are tasked with advancing innovation but, as we have seen with the 2017 Asilomar principles, for example, they can also lead in defining appropriate standards and codes of conduct to promote responsible and productive research and innovation.
Canada is well-situated in the AI field. Some of the foundational thought-leaders of deep learning are based in Canada. We have tremendous academic talent here.
Plus, the federal government announced $125 million for AI-focused research and development and nearly $1 billion over five years to promote innovation superclusters.
These announcements made international headlines, which is important in showing the world that Canada is the place to be for research and innovation (not to mention that we are known for having the second-largest tech sector outside Silicon Valley).
Finally, Canada is a well-respected international player, and AI is a technology that will require a coordinated international approach, especially with respect to data sharing and in the military and defence contexts. Canada is very well placed to add value to any international dialogue on AI.
DX Journal: How is AI changing the legal practice itself?
Piovesan: AI presents tremendous opportunity in the legal profession. As lawyers become more exposed to and comfortable with the technology, we will increasingly incorporate AI into all aspects of our practice.
The law firm can use AI to streamline internal processes such as mandate scoping. By understanding how much a typical piece of legal work costs, law firms can more quickly and accurately estimate new work that is similar in scope.
At my firm, McCarthy Tétrault, we’re using AI in M&A due diligence work. In so doing, we’re able to complete due diligence for an M&A transaction in a fraction of the time and at a fraction of the traditional cost.
AI is also being introduced on the litigation side through systems that can conduct concept-based legal research. It is also being used in e-discovery to categorize documents and predict their relevance.