Wednesday, April 15, 2026

Mercy: A Movie Review

Chris Pratt stars as Chris Raven in MERCY, from Amazon MGM Studios. Photo credit: Justin Lubin © 2025 Amazon Content Services LLC. All Rights Reserved.

As collaboration between humans and machines accelerates, complexity grows, and with it the risk of unintended consequences. That concern fuels Mercy, Timur Bekmambetov’s sleek sci-fi courtroom thriller starring Chris Pratt and Rebecca Ferguson. On the surface, the film presents a tense, near-future drama about a police detective forced to stand trial before an artificial intelligence he helped create. Beneath the action, however, Mercy raises a far more troubling question: when we delegate judgment to machines, whose morality are we truly enforcing?

Set in a near-future Los Angeles, the story focuses on the Mercy Program, an AI judicial system that determines guilt or innocence using precise algorithms. Detective Christopher “Chris” Raven (Pratt), one of the program’s creators, finds himself accused of murder and subjected to its cold calculations. Judge Maddox (Ferguson), the AI embodiment overseeing the trial, appears as a human-faced digital authority empowered to act as judge, jury, and executioner all at once. Strapped into a chair, denied traditional legal counsel, and forced to respond under time constraints, Raven must defend himself while each word he speaks influences the machine’s probabilistic judgment.

The premise is dramatic, but its philosophical implications run deeper than the thriller format might suggest. Modern databases do not merely store neutral facts; they preserve the accumulated judgments, assumptions, and priorities of human civilizations. Every system of information retrieval rests upon prior moral and metaphysical commitments, whether acknowledged or not. Software is never ethically neutral; it operationalizes the values embedded within the legal, cultural, or theological frameworks that shape its design. Mercy understands this. The AI judge is not portrayed as a rogue intelligence but as obedient. It enforces the moral architecture it has been given, nothing more and nothing less.

That is what makes the film’s tension so compelling. The system claims neutrality. It promises to eliminate bias, corruption, and emotional volatility from the courtroom. Yet neutrality itself is procedural, not philosophical. Someone determines what counts as evidence. Someone sets the thresholds for remorse, deception, and acceptable risk. Someone programmed the consequences. The machine does not transcend morality; it solidifies it.

Here, the film subtly opens a broader cultural debate. A system informed by Shari’a would produce different judicial outcomes than one shaped by secular liberalism. Both would differ from a framework grounded in the Torah. From a Jewish theological perspective, the Torah is not just one dataset among many but the enduring divine standard by which justice, mercy, and the sanctity of life are defined. The key question, then, is not whether our technologies embody a moral vision—they inevitably do—but which moral vision they encode. Mercy does not prescribe a specific answer, but it clarifies the stakes.

What elevates the film beyond a cautionary tech parable are its narrative twists. Surprising shifts and unexpected outcomes challenge the viewer’s assumptions. Most notably, AI Judge Maddox shows more measured compassion than one of the film’s zealous human authorities. The contrast is intentional. The machine is methodical, even patient. The human zealot, by contrast, is rigid and punitive, convinced of his own moral superiority.

This inversion highlights one of the film’s core questions: what happens when humans act like machines? Unthinking, machine-like zealotry, whether religious, ideological, or bureaucratic, rarely results in justice. It leads to enforcement devoid of reflection. Yet the film rejects the simplistic idea that machines are inherently better moral agents. An algorithm’s compassion is simulated within set constraints; it remains bounded by its programming. Mercy, in its deepest sense, requires discretion, humility, and the ability to see beyond metrics: the capacity to feel and reason at once.

Can machines learn to be merciful? Or can human zealots learn to temper certainty with humility? Is a machine better at approximating mercy than a human blinded by ideological fervor? Mercy offers no clear-cut resolution. Instead, it exposes the fragility of any system—human or artificial—that treats judgment as purely mechanical.

Bekmambetov’s direction visually emphasizes the theme. The cold geometry of the courtroom contrasts with Raven’s vulnerability. The AI’s calm, almost serene demeanor highlights the power imbalance. Efficiency replaces careful thought; speed takes the place of shared judgment. But beneath the technological spectacle lies an older concern about sovereignty: who has the authority to decide when a life can be taken?

Importantly, the film doesn’t fall into dystopian melodrama. The AI isn’t malevolent. It remains consistent, and its consistency is both its strength and its terror. A perfectly obedient system will reliably carry out whatever moral framework it’s given—whether merciful or ruthless.

Ultimately, Mercy succeeds not because it predicts the future but because it encourages reflection. It offers viewers food for thought without prescribing a specific conclusion. Are we creating tools that improve justice, or are we embedding our own unexamined assumptions into irreversible code? As human-machine collaboration accelerates, the larger challenge may not be technological innovation but moral clarity. The film suggests that the burden of judgment remains inherently human—even when delivered by a machine with a human face.