
Issues with Artificial Intelligence: The Need for Regulation

  • July 6, 2021
  • Daniel Mulroy

The Law Commission of Ontario (LCO) has initiated a multiyear, multidisciplinary project to research the development and impact of artificial intelligence (AI), automated decision-making (ADM), and algorithms on access to justice, human rights, and due process.

A recent publication from the LCO, Regulating AI: Critical Issues and Choices (link), discusses the issues that arise from the use of AI and ADM, and calls for significant regulatory reform in Ontario.

The use of AI and ADM technologies by governments and public agencies has seen extraordinary growth in recent years. This is not surprising. AI and ADM can improve government decision-making, accuracy, and fairness, all while promising to reduce costs by making government more efficient and effective.

However, the increased prevalence of AI and ADM poses a number of risks. Chief among these concerns are data discrimination and black box decision-making.

Data discrimination arises because AI and ADM systems have a bias problem. It is well known that algorithms can encode biases, because AI and ADM systems rely on historical data that can be racist, sexist, ageist, or otherwise discriminatory. And yet, these tools are increasingly being used throughout government, from determining government benefits to Canada’s criminal justice system. AI is already reported to assist in policing and to support judicial decision-making during bail and sentencing hearings, all without sufficient regulation.

The United Nations Committee on the Elimination of Racial Discrimination (CERD) published a report on the use of AI in law enforcement (link). CERD notes that the continued use of AI, big data, and facial recognition risks “deepening racism, racial discrimination, xenophobia and consequently the violation of many human rights.” This is because the unregulated use of AI can “reproduce and reinforce existing biases and lead to even more discriminatory practices.”

Black box decision-making refers to a system whose inputs and outputs can be seen, but which provides no explanation of how the output, or decision, was made. Because of the complexity of such systems, it is impossible to understand how the AI reached the decision it has.

This raises significant concerns about procedural fairness and legal redress. For example, AI and black box decision-making are becoming more prevalent in our healthcare system, as AI can achieve higher diagnostic accuracy than humans. In cases of malpractice, the law has evolved to assess the decision-making process of a medical professional. How can the law assess that process when an unexplainable algorithm is used?

While the use of AI and ADM can offer benefits, there is currently a significant regulatory gap in Canada when it comes to their use, which poses serious risks to human rights, access to justice, and due process, to name a few. As such, the LCO has called for proactive and comprehensive law reform in Ontario and across Canada: reform that responds to the reality of bias and discrimination and explicitly complies with the Canadian Charter of Rights and Freedoms and human rights legislation.

Bakerlaw is currently thinking through ways in which AI systems can be challenged in court to provide greater protections for those at risk of AI-based discrimination. If you think you have been discriminated against by AI, contact us (link) to see if we can help. If you’re interested in the topic, you can read more about this issue here (link).
