
human-centered-ai

Here are 37 public repositories matching this topic...

This GitHub repository contains the complete code for building Business-Ready Generative AI Systems (GenAISys) from scratch. It guides you through architecting and implementing advanced AI controllers, intelligent agents, and dynamic RAG frameworks. The projects demonstrate practical applications across various domains.

  • Updated Aug 9, 2025
  • Jupyter Notebook

A curated list of awesome academic research, books, code of ethics, courses, databases, data sets, frameworks, institutes, maturity models, newsletters, principles, podcasts, regulations, reports, responsible scale policies, tools and standards related to Responsible, Trustworthy, and Human-Centered AI.

  • Updated Dec 14, 2025

A deep exploration of Algorithmic Empathy, the next frontier in AI understanding. This project examines how machines can learn from human fallibility, model disagreement, and align with moral reasoning. It blends psychology, fairness metrics, interpretability, and co-learning design into one framework for humane intelligence.

  • Updated Nov 5, 2025

An in-depth exploration of the rise of human-centered, interactive machine learning. This article examines how Streamlit enables collaborative AI design by merging UX, visualization, and automation. Includes theory, architecture, and design insights from the ML Playground project.

  • Updated Nov 3, 2025

CognitiveLens is a Streamlit-powered analytics tool for exploring alignment between human and AI decisions. It visualizes fairness, calibration, and interpretability through metrics like Cohen’s κ, AUC, and Brier score. Designed for ethical AI, bias auditing, and decision transparency in machine learning systems.

  • Updated Nov 5, 2025
  • Python
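The metrics named in the description are standard and easy to sketch. Below is a minimal, self-contained illustration of Cohen's κ, the Brier score, and AUC in plain Python; the data and function names are hypothetical and not taken from the CognitiveLens codebase, which presumably uses library implementations such as scikit-learn's.

```python
# Minimal sketches of the agreement/calibration metrics mentioned above.
# Hypothetical data and helpers -- not the repository's actual implementation.

def cohens_kappa(a, b):
    """Agreement between two binary raters, corrected for chance agreement."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n      # observed agreement
    pa, pb = sum(a) / n, sum(b) / n                 # marginal positive rates
    pe = pa * pb + (1 - pa) * (1 - pb)              # agreement expected by chance
    return (po - pe) / (1 - pe)

def brier_score(y_true, y_prob):
    """Mean squared error of probabilistic predictions (lower is better)."""
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

def auc(y_true, y_prob):
    """Probability a random positive is ranked above a random negative."""
    pos = [p for y, p in zip(y_true, y_prob) if y == 1]
    neg = [p for y, p in zip(y_true, y_prob) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

human = [1, 0, 1, 1, 0, 0, 1, 0]                    # human decisions
model = [1, 0, 1, 0, 0, 1, 1, 0]                    # AI decisions
probs = [0.9, 0.2, 0.8, 0.4, 0.1, 0.6, 0.7, 0.3]   # AI confidence scores

print(cohens_kappa(human, model))  # 0.5    -- moderate human/AI agreement
print(brier_score(human, probs))   # 0.125  -- calibration error
print(auc(human, probs))           # 0.9375 -- ranking quality
```

Comparing κ (human-vs-model agreement) against AUC and Brier (model-vs-ground-truth quality) is one simple way to surface cases where a model is accurate but systematically disagrees with human judgment.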

This article reframes pricing as a negotiation rather than a prediction, showing how price emerges from tensions between product reality, market dynamics, and buyer behavior. It introduces negotiation-aware ML, value decomposition, and equilibrium modeling to build transparent, human-aligned pricing systems.

  • Updated Dec 11, 2025

A systems-thinking essay that explains why failure rarely happens suddenly. It shows how slow drift, accumulating pressure, and weakening buffers push systems toward collapse long before outcomes change, and why prediction-focused analytics miss the most important phase of failure.

  • Updated Dec 15, 2025
seed-lab

Seed Lab is a research workspace exploring language model behavior, symbolic reasoning, and emergent communication patterns through structured personas, simulated interactions, and interpretability-focused experiments.

  • Updated Nov 15, 2025
  • Python
Conscience-by-Design

The Conscience Layer Prototype, created by Aleksandar Rodić in 2025, establishes a research foundation for ethical artificial intelligence. It brings moral awareness into computation through principles of truth, human autonomy, and societal responsibility, defining a transparent and accountable form of intelligence.

  • Updated Dec 1, 2025
  • Python

An early-warning system that models disasters as instability transitions rather than isolated events. It combines force-based instability modeling with an interpretable ML escalation-risk layer to detect when hazards become disasters due to exposure growth, response delays, and buffer collapse.

  • Updated Dec 15, 2025
  • Python
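The core idea — that a hazard becomes a disaster when accumulating pressure outgrows an eroding buffer — can be illustrated with a toy simulation. Everything below (the growth and decay rates, the threshold, the function names) is a hypothetical sketch of the "instability transition" concept, not the repository's model.

```python
# Toy illustration of an instability transition: exposure pressure grows
# geometrically while response buffers decay, and the system "escalates"
# once pressure exceeds what the buffer can absorb.
# All coefficients are hypothetical, not taken from the repository.

def instability_ratio(pressure, buffer):
    """Ratio > 1.0 means the buffer can no longer absorb the load."""
    return pressure / buffer

def first_unstable_step(growth=1.2, decay=0.95, steps=20, threshold=1.0):
    """Simulate exposure growth vs. buffer erosion; return the first step
    at which the instability ratio crosses the threshold, else None."""
    pressure, buffer = 1.0, 5.0
    for t in range(steps):
        pressure *= growth   # exposure growth
        buffer *= decay      # buffer erosion
        if instability_ratio(pressure, buffer) > threshold:
            return t
    return None

print(first_unstable_step())  # 6 -- collapse arrives well after drift begins
```

The point the essay-style description makes falls out directly: the ratio drifts upward for several steps before anything visibly "fails," which is exactly the phase that outcome-prediction models tend to miss.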

A systems-thinking essay that reframes failure as a gradual transition rather than a discrete outcome. It explains how pressure accumulation, weakening buffers, and hidden instability precede visible collapse, and why prediction-based models arrive too late to prevent failure in human-centered systems.

  • Updated Dec 14, 2025
