What the law actually says — and doesn’t
Victims have a right to be heard. Arizona’s Constitution (the Victims’ Bill of Rights), A.R.S. §13-4424 (impact statements in presentence reports), and the federal Crime Victims’ Rights Act, 18 U.S.C. §3771, all recognize it. None of those provisions micromanages the format. Written, spoken, audio, video: Arizona practice already embraces all of them. On paper, an AI-assisted video is just another container.
But containers carry contents, and contents carry evidentiary risk. Traditional victim allocution can describe harm and humanity. An AI avatar adds performance — facial expressions, intonation, and the persuasive aura of “the victim speaking.” When the words speculate about forgiveness, friendship, or what sentence the deceased might want, we move from memory into mind-reading.
Public defenders’ core objection isn’t to grieving families; it’s to attribution. If the avatar says “I forgive you,” the court can’t test whether the real person would have said that post-homicide. Trauma changes people. That’s not an abstraction; appellate courts routinely caution against punishment decisions fueled by unfair prejudice rather than admissible aggravation.
Wales, to her credit, tried to thread the needle: she avoided having the avatar explicitly forgive or request a specific sentence. Still, the format amplifies inference. Even neutral phrasing, when delivered by a lifelike face, can feel like a directive.