OP / ED
U.S. courts are increasingly using algorithms to assess the risk of reoffending, including in decisions about parole and sentencing. One of the best-known and most controversial systems is COMPAS. In 2016, in State v. Loomis, the Wisconsin Supreme Court held that a judge's use of COMPAS at sentencing did not violate the defendant's due process rights, even though its code is closed and its internal parameters cannot be reviewed.
The ruling allowed the system to be used, but not as the sole basis for a sentence. Even so, the decision drew widespread criticism and raised concerns about the transparency and logic of algorithmic risk assessment. Studies, most notably ProPublica's 2016 analysis, found that Black defendants were far more likely to be wrongly labeled "high risk," including defendants with no prior offenses.
COMPAS and similar systems are used in many states, though their influence on court decisions varies. Still, in many cases these algorithms become central to judicial reasoning, which worries human rights advocates, especially given the lack of clear standards for regulating AI in the justice system.
The system works by analyzing factors such as age, residence, criminal history, and other social indicators. The problem is that these algorithms are trained on historical data that already carries racial and social bias. As a result, they replicate and reinforce past errors, turning risk assessment into a "black box" that courts increasingly rely on when making decisions.
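To see how that happens, consider a deliberately simplified sketch in Python. Everything in it is hypothetical: the data is synthetic, the groups "A" and "B" are invented, and nothing here reflects COMPAS's actual inputs, weights, or code, which remain proprietary. A toy risk score is trained on "historical" records in which one group was policed more heavily, and it then assigns that group higher risk even for defendants with identical ages and records.

```python
# Purely illustrative sketch: a toy risk model trained on synthetic
# "historical" data in which one hypothetical group (B) is policed more
# heavily, so its recorded rearrests are inflated. COMPAS's real inputs,
# weights, and code are proprietary; nothing here reproduces them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B (hypothetical)
age = rng.integers(18, 60, n)
priors = rng.poisson(1.0, n)       # prior offenses, same distribution for both groups

# Underlying reoffense behavior depends only on age and priors, not on group.
logit = -1.5 + 0.5 * priors - 0.03 * (age - 18)
reoffends = rng.random(n) < 1 / (1 + np.exp(-logit))

# Biased data collection: group B's reoffenses are recorded far more often.
detection_rate = np.where(group == 1, 0.9, 0.5)
recorded_rearrest = reoffends & (rng.random(n) < detection_rate)

# Train a "risk assessment" on the biased historical labels.
X = np.column_stack([age, priors, group])
model = LogisticRegression(max_iter=1000).fit(X, recorded_rearrest)

# Two defendants with identical age and record, differing only in group:
same_profile = np.array([[25, 1, 0], [25, 1, 1]])
print(model.predict_proba(same_profile)[:, 1])
# The group-B defendant receives a noticeably higher "risk" score, even though
# the underlying behavior was generated identically for both groups.
```

The point of the sketch is not the particular numbers but the mechanism: a model trained on skewed records treats the skew as signal, and the bias comes back out as an apparently objective score.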
The closed code and lack of transparency make it nearly impossible to appeal a sentence influenced by AI. There have been cases in which Black or Latino defendants were labeled "high risk" without any clear basis and had almost no way to challenge the label. No one can explain how or why the algorithm reached its conclusion. This creates a dangerous trend in which algorithms acquire quasi-legal authority while defendants are left without real guarantees of a fair trial.
This approach erodes trust in the justice system. When a risk score goes wrong, no one is accountable: not the judge, not the expert, not the algorithm. Instead of delivering justice, we are automating structural racism and social inequality. We let algorithms decide human fates, reinforcing and amplifying past injustice.
The U.S. Supreme Court has yet to issue a clear legal opinion on the use of AI in the courtroom; it declined to hear the Loomis appeal in 2017, leaving a legal vacuum. While that vacuum persists, algorithms continue to influence rulings without ever being tested for violations of basic rights.
The Sixth Amendment to the U.S. Constitution guarantees defendants the right to be informed of the nature of the accusation against them and to confront the witnesses against them. Those rights clash directly with AI-based sentencing, where the logic is hidden. We cannot call an algorithm to the stand, question it, or hold it accountable. Yet it becomes a participant in the process, one that no one can review or challenge.
A new reality is taking shape in the U.S., one in which a person's freedom depends on code hidden from public view. And where there is no transparency, there can be no justice. If we continue to do nothing, the machine will speak for us.





