
Closing the Gaps in Fragmented Support

Why We Started Hapi

What does success look like after incarceration? For too many people, release means being dropped back into the same unstable housing, scarce employment, and limited social support that contributed to their original charges. Information is scattered, outdated, and difficult to navigate, especially during an already overwhelming transition.

Hapi was born from listening to people who were trying to stitch together services on their own. Their lived experience showed us a bigger problem: fragmented care, poor information sharing, and a lack of trauma‑informed, person‑centred planning. We set out to build a single, conversation‑based hub that anyone involved in re‑entry could trust: people leaving prison and the professionals walking beside them.

Why Reintegration Still Fails Too Many

Young, under‑educated, under‑employed: Most incarcerated women are under 34; men under 40. Many left school early and were unemployed or underemployed at arrest.

High trauma load: Histories of violence, substance use, and family instability are common, yet rarely addressed in re‑entry planning.

Same conditions, same risks: People are released back into the environments that contributed to their charges. Without timely, accurate support, “the old way” can feel like the only option.

Information deserts: Outdated lists, broken links, and gatekept knowledge force people to “figure it out” alone.

Overstretched staff: Case workers and parole officers juggle heavy caseloads and fragmented data, leaving little time for person‑centred planning.

No real dialogue: Information may be available, but people often cannot engage with it because they do not understand it. Those needing support face cognitive, financial, and emotional burdens, leaving little capacity to navigate complex systems on their own.


Who We Serve and Why Their Voices Matter

Currently Incarcerated 

Successful re-entry starts from within correctional institutions. People currently incarcerated offer inside perspectives on daily prison life, trauma, mental health struggles, social dynamics, and the fears and needs that surround release. Most are under 40, with limited formal education, unstable work histories, and significant childhood adversity. Their insights shape how Hapi frames questions, surfaces resources, and scales across genders without erasing differences.


Formerly Incarcerated 

This group ranges from those newly released, juggling housing, IDs, supervision, stigma, work, and transportation, to those further along who can reflect on long‑term stability (credit repair, family reunification, mental health). Their experiences guide our step‑by‑step planning now and our long‑horizon support features later.


Parole and Probation Officers

They see where plans break down: delayed paperwork, miscommunication, missed appointments. Hapi helps them coordinate accurate, timely next steps.


Corrections Officers

They understand the operational reality of prison life. Their perspective helps Hapi bridge the gap between inside processes and community handoffs.


Community Organizations & Frontline Workers

They deliver the services people need, but struggle with capacity and visibility. Hapi amplifies their programs and makes connections smoother and faster.


Community Members & Grassroots Helpers

Neighbours, volunteers, faith groups, and people who quietly support re‑entry every day. Their stories highlight how community involvement strengthens reintegration.


Beyond the Algorithm: What “Human-in-the-Loop” Really Means for Justice-Oriented AI

AI is often portrayed as an impartial force: neutral, efficient, and inherently progressive. But in practice, AI is neither neutral nor automatically beneficial. Its impact depends entirely on how, and by whom, it is designed, deployed, and governed. In public sectors, where historical inequalities are deeply entrenched, the stakes of that dependency are high. Maybe responsible AI isn't just about fixing bugs in code; it's about fixing broken systems.

The justice system, for example, is not a neutral landscape. It has long prioritized punishment over restoration, control over care. Without a shift in approach, AI risks becoming a digital extension of these same punitive dynamics, amplifying surveillance, reinforcing bias, and deepening mistrust.

Justice data suffers from a “missingness through mistrust” dynamic. A 2024 systematic review of 28 studies on sexual‑assault case attrition found that “lack of trust in the criminal justice system” is a dominant reason survivors never file a report, leaving their experiences, and the data they represent, outside official records [1]. At a broader level, empirical work shows that communities who feel unfavourably toward local police are significantly less likely to report crimes they witness, further hollowing out the crime statistics that power risk‑assessment and predictive‑policing tools [2]. When AI models are trained on these partial, mistrust‑shaped datasets, they misestimate risk, misallocate resources, and reinforce the very inequities that prompted people to withhold their data in the first place.
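To make that dynamic concrete, here is a small, purely illustrative Python sketch. The rates are invented for the example (they are not drawn from the cited studies); it shows how unequal reporting rates alone can make two groups with identical underlying incident rates look very different in official records:

```python
# Illustrative only: invented rates, not data from the cited studies.
import random

random.seed(0)

TRUE_INCIDENT_RATE = 0.10                 # identical in both groups
REPORTING_RATE = {"A": 0.80, "B": 0.35}   # group B trusts the system less

def recorded_rate(group: str, n: int = 100_000) -> float:
    """Fraction of people whose incident made it into the official records."""
    recorded = 0
    for _ in range(n):
        incident = random.random() < TRUE_INCIDENT_RATE
        if incident and random.random() < REPORTING_RATE[group]:
            recorded += 1
    return recorded / n

for group in ("A", "B"):
    print(group, round(recorded_rate(group), 3))
# Roughly: A 0.08, B 0.035. A model trained on these records would conclude
# group B needs less than half the resources, though the true need is equal.
```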

As computer scientists, we're naturally drawn to AI's promise of scale and precision. But we also need to be honest about the contexts in which these tools operate. That's where human-in-the-loop (HITL) approaches come in: not as a technical afterthought, but as an ethical imperative. HITL means integrating lived expertise, front-line experience, and community values throughout the entire AI lifecycle, from problem framing and data collection to model building and system evaluation.
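One way to picture HITL as a lifecycle requirement rather than an add-on is to treat every stage as gated on a recorded human review. The sketch below is a minimal Python illustration; the stage names, reviewer roles, and gating rule are our assumptions for the example, not a description of any deployed system:

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    stage: str           # "problem_framing", "data_collection", ...
    reviewer_role: str   # e.g. "lived-experience advisor", "caseworker"
    approved: bool
    notes: str

@dataclass
class Lifecycle:
    reviews: list[Review] = field(default_factory=list)

    def record(self, review: Review) -> None:
        self.reviews.append(review)

    def may_advance_past(self, stage: str) -> bool:
        """The pipeline advances only after an approving human review."""
        return any(r.stage == stage and r.approved for r in self.reviews)

lc = Lifecycle()
lc.record(Review("problem_framing", "lived-experience advisor", True,
                 "Reframed the target from 'risk' to 'unmet needs'."))
print(lc.may_advance_past("problem_framing"))  # True
print(lc.may_advance_past("data_collection"))  # False: no review yet
```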

When AI Goes Wrong in Justice

COMPAS risk scores

What happened – Northpointe’s COMPAS tool is still one of the most‑used recidivism predictors in U.S. courts, yet fresh scrutiny keeps showing racial fault‑lines. A study injected controlled “noise” into the COMPAS data and found systematic reliability gaps between White and non‑White defendants: when the exact same factual errors were inserted, the model’s outputs for people of colour swung further and more often than for Whites [3]. Earlier work has shown higher false‑positive rates for Black defendants, so these reliability gaps compound accuracy gaps.

Gap – Judges usually see only the final “low/medium/high-risk” category; defendants rarely see anything. There is no built‑in mechanism for a caseworker, lawyer, or defendant to challenge a score before it influences bail, sentencing, or parole. Because the model and its weights are proprietary, independent auditors cannot probe why a score moved after an entry was corrected. In short, the humans who should close the loop (judges, counsel, and defendants) lack both visibility and authority.


Ohio actuarial‑risk assessments (ORAS, others)

What happened – Ohio courts deploy a suite of actuarial‑risk‑assessment (ARA) tools at pre‑trial, sentencing, and supervision stages. A 2024 report from Ohio State University surveyed bench officers statewide and concluded that 60% of judges feel they have not received adequate training on how to interpret or challenge ARA scores [4]. Without that fluency, many judges either default to the tool without question or revert to instinct, negating any promised consistency.

Gap – Training and feedback loops are shallow: there is no standard refresher once the tool is installed, no user dashboard showing how a judge’s overrides affect outcomes, and no quarterly audit that pairs model errors with real‑world recidivism or failure‑to‑appear rates. The result is a brittle workflow: judges must trust (or ignore) numbers they cannot interrogate, and defendants cannot point to a transparent error chain.


Facial‑recognition–driven arrests (Florida, 2024)

What happened – Jacksonville Beach police fed grainy CCTV imagery into a regional facial‑recognition hub; the system spat out a “93% match” to a 51‑year‑old man named Robert Dillon. Nine months later, he was arrested, only for prosecutors to drop the case when no corroborating evidence emerged [5]. This joins at least eight other U.S. wrongful arrests traced to face‑ID hits.

Gap – Department policy says a match is only an investigative lead, yet officers treated it as probable cause. There was no mandatory second‑review step, no photographic line‑up vetted by an independent unit, and no requirement that the supervising attorney sign off before an arrest warrant was issued. Automation bias overrode policy safeguards because the loop stopped at the screen that said “93%.”
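A hypothetical sketch of the missing safeguard: encode the “match is only a lead” policy so the score alone can never authorize an arrest. The field names and checks below are illustrative assumptions, not the department’s actual workflow:

```python
from dataclasses import dataclass

@dataclass
class FaceMatch:
    candidate_id: str
    score: float   # e.g. 0.93 for a "93% match"

@dataclass
class Corroboration:
    independent_lineup_passed: bool = False
    corroborating_evidence: bool = False
    attorney_signoff: bool = False

def warrant_may_issue(match: FaceMatch, corr: Corroboration) -> bool:
    """Policy as code: the match score is deliberately ignored.

    A face-ID hit is an investigative lead; every independent check
    below must pass before anyone acts on it.
    """
    return (corr.independent_lineup_passed
            and corr.corroborating_evidence
            and corr.attorney_signoff)

hit = FaceMatch("subject-0042", score=0.93)
print(warrant_may_issue(hit, Corroboration()))  # False: 93% is not enough
```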


Why Collaboration Matters

Building useful AI for justice is not just a coding exercise; it is a collective effort. Engineers know the math, and policymakers write the rules, but neither group lives the daily reality of incarceration or re‑entry. Real insight comes from the people who do: those who have been through the system, the officers who supervise them, the social‑service teams scrambling for resources, and the advocates who keep everyone honest. When all of them help steer the project from day one, the result is technology that understands context.

Design‑justice practice calls this co‑creation. Designers facilitate the work, but they don’t author it. In justice settings, that means:

  • Data‑storytelling workshops that let communities annotate what spreadsheets miss: housing insecurity, unrecorded trauma, and informal support networks.

  • Problem‑framing circles where people on probation, family members, case managers, and policy leads define the questions worth solving.

  • Standing oversight councils, including formerly incarcerated members, that review metrics, approve updates, and keep the tool responsive to real‑world change.

  • Live walk‑throughs of model outputs so participants can surface surprises or red‑flag bias before anything goes live.

This kind of collaboration pays off on the ground. In Canada, nearly seven in ten parole officers already say their caseloads feel unmanageable; when an AI system triages routine questions and reserves human judgment for the nuanced calls, both workload and error rates can drop.
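As a rough sketch of what that triage could look like (the topics, threshold, and routing rule here are hypothetical, not Hapi’s actual logic):

```python
# Hypothetical triage: routine, well-specified questions get automated
# answers; anything nuanced or low-confidence stays with a person.
ROUTINE_TOPICS = {"id_replacement", "bus_routes", "food_banks", "office_hours"}
CONFIDENCE_FLOOR = 0.90

def route(topic: str, model_confidence: float) -> str:
    if topic in ROUTINE_TOPICS and model_confidence >= CONFIDENCE_FLOOR:
        return "automated_answer"
    return "human_officer"   # nuanced calls keep human judgment in the loop

print(route("bus_routes", 0.97))         # automated_answer
print(route("parole_conditions", 0.97))  # human_officer: high-stakes topic
```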

References:



Get in Touch

We'd love to hear from you.

  • LinkedIn

Edmonton, AB

Working Worldwide

Duologue Systems

© 2025 by Duologue Systems