Let me ask you a question – what do you think of the pictures below? They are two independent snapshots: the left is a cosmic image of the world, and the right is text on "Analytics in Nonprofits". Both are AI-generated. All I did was enter "world image" (for the image on the left) and "analytics in nonprofits" (for the text on the right).
Now let's try another question: how do you feel about AI when it comes to credit scoring, facial recognition across skin color and ethnicity, algorithmic surveillance, and algorithm-driven decisions about livelihood-related opportunities?
I know there is a lot to unpack in both of these questions; the answers are neither straightforward nor simple. But I encourage you to take a few moments to recognize the significance of the AI already around us – the AI we have, use, and are probably addicted to.
Let me set some context on what you and I are exploring today.
For the past few days, I have been working on designing two workshops for our community – Towards Human-Centric AI and Advancing Equitable Visualizations. Neither has been easy on my schedule, but both make me read a ton! When it comes to AI, I have come across multiple examples of both scenarios – cases where leveraging AI genuinely improved something and cases where AI led to more harm than good. I have yet to decide if this is a clear binary choice like black & white or 1 & 0. Can this even be a binary choice?
Today I want to talk to you about what our relationship with AI should look like, what "human-centric" even means in that context, and how we get to that stable, healthy relationship with AI. And I want to do it by sharing 7 fundamental truths I am realizing and recognizing more deeply with each reading.
Now, are these truths permanent? Nope. Are they one-size-fits-all? Nope. Can they evolve? Of course, in fact, they must.
The purpose of these tenets is to guide you and me when we are exploring our feelings and learning to build trust in AI. It is to remind us that moving towards AI is our individual and collective conscious choice for a better world rather than a forced societal push to accept the next big thing. So let us use this list to understand the work ahead of us.
Here goes the list:
#1. Generally speaking, AI is not our (humans') enemy, competitor, inferior, or superior. Therefore, we need to be careful in our choice of words when describing AI.
It is a human-designed technology that should be treated as a collaborator. We and AI need an intentional, healthy, and collaborative relationship.
#2. The AI we design must clearly state and share the explicit choices made in scoping the problem being solved, collecting the data, and defining the perceived and intended users (not just in terms of their job roles but their overall identity).
Stating these choices explicitly clarifies the AI's purpose: what is prioritized versus what remains deprioritized.
#3. The AI we build needs to be socially and racially respectful.
For example, physical surveillance designed to serve only binary genders, or a legal justice system that tags offenders disproportionately across races and ethnicities, is a clear failure to center humans.
#4. The AI we build must not perpetuate harmful biases.
For example, an AI-based facial recognition system trained predominantly on images of white people may flag pictures of people of other ethnicities as "anomalies" during testing/validation.
AI can carry biases from any of the following sources, and we must design appropriate actions for it to take when outcomes are biased: the underlying data, data cleaning/feature engineering, model design, testing/validation, and the access/medium of the output. One simple place to start is the kind of disparity check sketched below.
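To make tenet #4 concrete, here is a minimal, hypothetical sketch of one such disparity check: comparing a model's false positive rate across demographic groups on a validation set. The function, field names, and data below are invented for illustration, not taken from any specific system.

```python
from collections import defaultdict

def false_positive_rate_by_group(records):
    """Compute the false positive rate per demographic group.

    Each record is a dict with 'group' (str), 'label' (0/1 ground truth),
    and 'pred' (0/1 model output, where 1 means flagged as an anomaly).
    """
    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # ground-truth negatives per group
    for r in records:
        if r["label"] == 0:
            neg[r["group"]] += 1
            if r["pred"] == 1:
                fp[r["group"]] += 1
    return {g: round(fp[g] / neg[g], 2) for g in neg}

# Hypothetical validation records for an anomaly-flagging model.
validation = [
    {"group": "A", "label": 0, "pred": 0},
    {"group": "A", "label": 0, "pred": 0},
    {"group": "B", "label": 0, "pred": 1},
    {"group": "B", "label": 0, "pred": 1},
    {"group": "B", "label": 0, "pred": 0},
]

print(false_positive_rate_by_group(validation))  # {'A': 0.0, 'B': 0.67}
```

A large gap between groups, like the one above, is not a verdict by itself, but it is a clear signal to revisit the training data, feature engineering, and model design before trusting the system.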
#5. The AI we build must be tested thoroughly during and after deployment. This continuous testing is not limited to the designers and funders; it must be conducted collaboratively with the communities it is expected to impact.
Continuous monitoring and evaluation let us capture the nuanced, real-time complexities that the raw data missed. Such an evaluation enables us to form a balanced, collaborative relationship with AI.
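As one illustration, here is a hypothetical sketch (invented names, numbers, and threshold, not a full monitoring system) of a post-deployment check: comparing the share of "flagged" predictions week over week, so a sudden shift prompts a review with the affected community.

```python
def prediction_rate(preds):
    """Share of positive (flagged) predictions in a window of 0/1 outputs."""
    return sum(preds) / len(preds) if preds else 0.0

def drift_alert(baseline_preds, current_preds, threshold=0.10):
    """Return (alert, gap): alert is True when the flagged-prediction rate
    shifts more than `threshold` between the two windows."""
    gap = abs(prediction_rate(current_preds) - prediction_rate(baseline_preds))
    return gap > threshold, gap

# Hypothetical weekly windows of model outputs (1 = flagged).
baseline = [0, 0, 1, 0, 0, 1, 0, 0, 0, 0]   # 20% flagged at launch
this_week = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # 70% flagged now

alert, gap = drift_alert(baseline, this_week)
if alert:
    print(f"Flagged rate shifted by {gap:.0%}; review with the impacted community.")
```

A threshold like 0.10 is itself a design choice; ideally it is set with, not for, the communities the system affects.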
#6. The AI we build must be questioned, challenged, and embraced appropriately, so that all those benefiting from it are held accountable with transparency.
#7. The AI we build must not be valued only in quantified numbers (e.g., dollars saved, dollars earned, users gained) or as an input to increased efficiency, innovation, market domination, or capital accumulation. Its value lies in the holistic well-being, safety, and non-extractive growth of those impacted by its outcomes.
*********************************
Through my ongoing research, I am realizing (with exhaustion) that there is no clear definition of, expectation for, or shared understanding of what constitutes "human-centricity" in AI. To some, it is about fairness; to others, it is about all the claimed or proven good AI can create for humans. Neither resonates with me in totality. Therefore, you and I need this list of tenets.
This list is a start to defining what AI could be, should be, and must be for both you and me – so it acknowledges and accounts for our commonalities and differences.
So, what do I want from you today (my readers)?
Today, I want you to share your thoughts on the phrase "human-centric" AI. Does it mean anything to you, and if so, what?