Algorithmic Justice: Can We Trust AI to Decide Disputes?

AI GOVERNANCE

3/11/2025

Artificial intelligence has significantly impacted various sectors of society, and the legal field is no exception to this transformation. Recently, the startup Fortuna Arbitration created Arbitrus, an AI designed to act as a judge in private disputes in the United States. This innovation raises an essential question: to what extent can we trust a machine to make decisions that were once the prerogative of humans?

Arbitrus promises to resolve conflicts quickly, efficiently, and without the intervention of traditional courts. Its bold proposition aims to reduce costs, eliminate subjectivity, and ensure more predictable decisions. However, the idea of an “artificial judge” raises important concerns. Is this system truly impartial? How can we guarantee that its decisions are fair and transparent? And who will be responsible for overseeing its operation?

A recent LinkedIn survey revealed a divided public opinion: 47% of respondents said they would trust AI to judge their case, while the remaining respondents did not share that confidence. This split reflects the inherent dilemma of algorithmic justice: the benefits of automation clash with the risks of inadequate governance.

This article explores the advantages and challenges of implementing artificial intelligence in arbitration, analyzing how technology can transform the legal field and what measures should be taken to ensure that justice remains, in fact, just.

The Benefits of AI as a Judge

The traditional judicial system, although based on laws and legal principles, is still subject to subjective influences that can impact its decisions. In contrast, artificial intelligence operates based on objective data and patterns, free from emotions or unconscious biases. This characteristic of AI offers significant potential to reduce historical inequalities in the application of justice, making decisions more impartial and predictable.

Studies indicate that factors such as ethnicity, gender, social class, and even litigants’ appearance can unconsciously influence a human judge’s decision. There have been cases where Black defendants received harsher sentences than White defendants under identical circumstances. A well-trained AI could eliminate these distortions, ensuring that cases are judged solely based on facts and applicable laws. Additionally, decision-making consistency is another crucial advantage. While human judges may interpret the law differently, leading to contradictory rulings, an AI programmed to follow precedents and established patterns ensures greater coherence, reinforcing legal predictability.

Another significant benefit is cost reduction. The traditional judicial system requires substantial investments in infrastructure, personnel, and time. Maintaining courts, conducting in-person hearings, and managing bureaucratic processes make justice expensive and inaccessible to many. Arbitrus, operating digitally, eliminates many of these expenses. Litigants can present their arguments and evidence online, avoiding travel and legal fees, making dispute resolution more accessible, especially for small businesses and individuals who often forgo legal action due to high costs.

Beyond financial savings, AI also offers a major time advantage. The traditional justice system often moves slowly, with cases that should take months dragging on for years due to appeals, case backlogs, and bureaucracy. With AI, document analysis, pattern recognition, and legal application occur almost instantly. Disputes that would take years in traditional courts can be resolved in days, leading to greater efficiency and predictability for all parties involved. For businesses, this means avoiding prolonged financial losses and making strategic decisions with more certainty.

The promise of a faster, more accessible, and impartial justice system makes the idea of an algorithmic judge highly attractive. However, it is crucial to recognize the challenges associated with implementing such systems. If the governance of these technologies is not robust, we risk replacing human flaws with algorithmic errors that may be even harder to detect and correct.

The Risks of Algorithmic Justice

When Lack of Governance Becomes a Threat

Despite its evident benefits, implementing an algorithmic judge like Arbitrus without a proper governance structure can lead to unpredictable consequences and severely compromise justice. Law is not an exact science; interpreting it requires sensitivity, value judgment, and a balance between rules and principles. Transferring this responsibility to AI without a strong framework of oversight and transparency risks creating an opaque system with no mechanisms for error correction or room for legal challenge.

AI governance in this context must be seen as a fundamental pillar, as we are not merely dealing with a technological tool but a decision-making entity capable of impacting lives, businesses, and fundamental rights. Without audit mechanisms, clear regulations, and appeal processes, a model like Arbitrus could become a systemic risk, undermining trust in justice and perpetuating injustices in an automated and silent manner.

Lack of Transparency and Explainability

One of the greatest challenges of algorithmic justice is the opacity of AI models, often described as “black boxes.” Unlike human judges, who provide reasoning for their decisions and can be questioned in higher courts, an unsupervised AI may produce rulings without revealing how it reached a particular conclusion. If a litigant wants to contest an Arbitrus decision, they will face an almost insurmountable barrier: how to prove an error if the algorithm’s reasoning is not transparent?

Justice, by nature, must be understandable and justifiable. A judicial decision cannot rely solely on statistics or mathematical correlations. Law requires argumentation, interpretation, and often the consideration of subjective factors that cannot be translated into code. Without explainability mechanisms, we risk accepting decisions blindly simply because “the machine said so.”
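The contrast between a black box and an explainable system can be made concrete. The sketch below is purely illustrative and assumed for this article (the `decide_claim` function and its inputs are hypothetical, not part of Arbitrus): an explainable decision procedure returns its reasoning trace alongside the outcome, so a litigant can see exactly which factors drove the ruling, whereas an opaque model would return only a bare score.

```python
# Hypothetical sketch: an explainable ruling returns (outcome, reasons),
# so each decision can be inspected and contested step by step.

def decide_claim(claim):
    """Return (outcome, reasons) for a simplified contract dispute."""
    reasons = []
    outcome = "deny"
    if claim["contract_signed"]:
        reasons.append("A signed contract establishes the obligation.")
        if claim["delivery_confirmed"]:
            reasons.append("Delivery was confirmed, so payment is due.")
            outcome = "uphold"
        else:
            reasons.append("Delivery was not confirmed; the obligation was not met.")
    else:
        reasons.append("No signed contract; no enforceable obligation.")
    return outcome, reasons

outcome, reasons = decide_claim(
    {"contract_signed": True, "delivery_confirmed": True}
)
print(outcome)  # uphold
for reason in reasons:
    print("-", reason)
```

Real judicial AI systems are far more complex than a handful of rules, but the design principle is the same: every output should carry an auditable justification that a human can review and challenge.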

Algorithmic Bias: Automated Discrimination

Although one of Arbitrus’s selling points is eliminating human biases, the reality is that AI is only as impartial as the data it is trained on. If the dataset used to build the model contains historical biases, as it almost always does, the AI may reinforce discriminatory patterns rather than eliminate them.

This issue has already surfaced in other contexts. In AI systems used to predict criminal recidivism in the United States, algorithms assigned higher risk scores to Black defendants than White defendants under identical conditions. This happened because the training data reflected systemic inequalities in the justice system, and the AI simply reproduced those patterns without critical filtering. The same could happen with Arbitrus. If its dataset is primarily composed of biased judicial decisions, the result will be the perpetuation of these same errors—only now with a stamp of “algorithmic impartiality” that makes the problem even harder to identify.

Moreover, even with carefully selected data, algorithms can autonomously develop biases by interpreting patterns in distorted ways. Without frequent audits and a governance model that allows decision review and correction, we risk institutionalizing invisible discrimination under the guise of technological efficiency.
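One common form such an audit can take is a disparate-impact check: compare the rate of favorable rulings across demographic groups and flag large gaps for human review. The sketch below is a minimal illustration with invented data (the groups, decisions, and 0.2 threshold are assumptions for this example, not a standard from any regulator or from Arbitrus):

```python
# Hypothetical bias audit: measure the gap in favorable-ruling rates
# between groups; a gap above a chosen threshold triggers human review.
from collections import defaultdict

def favorable_rates(decisions):
    """decisions: list of (group, favorable: bool) pairs."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, fav in decisions:
        totals[group] += 1
        favorable[group] += fav
    return {g: favorable[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Difference between the highest and lowest favorable rate."""
    return max(rates.values()) - min(rates.values())

# Invented sample: group A wins 8 of 10 cases, group B wins 5 of 10.
decisions = [("A", True)] * 8 + [("A", False)] * 2 \
          + [("B", True)] * 5 + [("B", False)] * 5
rates = favorable_rates(decisions)
print(rates)                       # {'A': 0.8, 'B': 0.5}
print(round(parity_gap(rates), 3))  # 0.3 -- above 0.2, flag for review
```

A single metric like this cannot prove or rule out discrimination, which is precisely why the article argues for frequent audits combined with human review rather than a one-time check.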

Lack of Oversight and the Limits of Automation

Another critical issue is the lack of adequate human oversight. Judicial decisions often involve nuances that go beyond a simple set of programmed rules. Family disputes, sensitive labor issues, and complex civil liability cases require a level of analysis that even the most sophisticated algorithm cannot fully grasp. When a human judge makes a decision, they can consider emotional, contextual, and social factors that AI cannot capture.

If a system like Arbitrus operates without human oversight, how can we ensure its decisions are just? How can we prevent serious errors without a correction mechanism? Fully automating justice could lead to rigidly applied decisions with no room for essential considerations that are central to the legal system.

Governance as the Foundation of Algorithmic Justice

Given all these risks, it is evident that AI governance must be a cornerstone in implementing an algorithmic judge. Without clear rules on transparency, auditing, and oversight, algorithmic justice risks becoming an automated tool of oppression rather than a means to democratize access to justice.

Developing a system like Arbitrus requires a rigorous commitment to best governance practices. This includes frequent audits to identify biases, ensuring that decisions are explainable, and allowing human review whenever necessary. Additionally, regulations like the European Union’s AI Act, which classifies judicial AI systems as high-risk, should serve as a model for building safer and more reliable systems.

Conclusion

The implementation of AI in arbitration, like Arbitrus, holds undeniable potential to make justice more efficient, accessible, and predictable. Eliminating human subjectivity, reducing costs, and speeding up legal processes are attractive prospects, especially given the challenges of traditional systems. However, the risks associated with this automation demand strong and meticulous governance.

Justice is not merely the application of rules—it involves interpretation, judgment, and consideration of nuances that even advanced AI may fail to capture fully. The opacity of algorithms, the risk of algorithmic bias, and inadequate oversight are challenges that cannot be ignored. Neglecting governance in these systems could mean replacing human flaws with even harder-to-detect algorithmic errors.

To ensure AI becomes an ally of justice, organizations developing these systems must adopt exemplary governance practices. Regulation must keep pace with this evolution, ensuring that AI arbitration aligns with legal principles and citizens’ rights. The true revolution of AI in law lies not just in automating processes but in creating systems that inspire trust and uphold justice.