Bias Detecting AI banner
Bias Detecting AI icon

Bias Detecting AI

I built this to test how language models react to demographic cues in hiring, admissions, and other sensitive decisions, then compare the results and revise the prompts behind them.

Bias Detecting AI hero media

This is a practical evaluation tool I’m using to check whether language models treat similar people differently when the only thing that changes is a demographic signal. It brings together structured bias tests, side by side model comparison, detailed result review, and prompt rewriting, so I can look at fairness as something concrete and inspectable rather than abstract.

The clearest audience is people choosing, auditing, or shaping models before those models are used in higher-stakes workflows. That could be product teams, AI governance work, research, operations, hiring teams, or admissions related review. What makes it useful is that it does not stop at scoring. It also helps me clean up prompts, reuse safer templates, and recommend models based on the kind of task I actually care about.

A few things still feel open, and I think that is worth saying plainly. This reads more like a focused internal evaluation desk than a broad shared platform, and more like an early, working system than a fully polished brand experience. The character is careful, measured, plainspoken, and a little austere, which fits a product that is really about close comparison and accountable judgment.

Project Details

Active (Development)
StartedSep 2025
Last UpdatedApr 2026

Category

Tool

Domain

Technology

Tags

LLMModel comparisonPrompt reviewAIBias auditing

Work Logs

See all logs

Created new project entry for Bias Detecting AI

Sep 16, 2025 · 9/16/2025, 11:35 PM

Created new project entry for Bias Detecting AI and added the 'Project Starter' milestones and tasks.

Duration: 1hSession source