Designing AI


Reading Response #9 to “Humans in the Loop: The Design of Interactive AI Systems” by Ge Wang & “Experimental Creative Writing With Vectorized Words” by Allison Parrish

Sami Wurm

Dec 3rd, 2022

Music 256A / CS476a, Stanford University

Reading Response: Designing AI

For this final reading/video response, I would like to focus on the principle of transparency and humanity in AI.

Before coming to Stanford, I worked in the AI/Cognitive Science Lab at Bucknell University for four years and then in the Computer Human Interaction: Mobility, Privacy, and Security Lab at Carnegie Mellon University for nine months. In both labs I worked on projects that pushed me to explore not only how AI works, but also the ethics of using Artificial Intelligence in different corners of society.

On the one hand, I learned how ML can be used in music to make really fun tools, and how AI can be used to train robots to explore new environments! These implementations of AI were very fun and exciting to dive into. On the other hand, I learned about the countless ethical issues embedded in AI systems. For example: facial recognition systems are significantly worse at recognizing people with darker skin; AI systems used in housing markets are less likely to approve Black mortgage applicants for a loan; YouTube videos with LGBTQ+ tags are more likely to be taken down; Apple’s credit card algorithm was found to offer women lower credit limits than men with the same financial standing; and new image generators such as DALL-E 2 produce images that perpetuate harmful stereotypes, like returning mostly pictures of white men when asked what a ‘doctor’ looks like. If you want to learn more about these problems, here is a great page of resources on algorithmic bias. These problems are being worked on… but they also seem to be endless.

After working in this space for a while, I felt less than hopeful that AI can progress in our society in a way that is more helpful than harmful. And even when it is ‘helpful’, like when it can produce an image of Cool Cat in the style of Vincent van Gogh’s “The Starry Night”, is it really necessary? Is it really art? Whether that counts as artistically ‘helpful’ is still up for debate (although, personally, I think it’s still art). And in more utilitarian corners of society, AI hasn’t yet been used in a way that doesn’t bring huge ethical issues and complications along with whatever it is automating, speeding up, or making ‘better’. So why pursue it?

Well, I guess because people love new inventions, new challenges, and building systems that we know we can build. AI is undoubtedly going to keep progressing in our society, so I only hope that the ethics surrounding it can progress and be instituted just as quickly. I think the most helpful tool for ensuring ethical training data sets and uses of AI systems is, as noted in the ACM Code of Ethics and Professional Conduct and in Humans in the Loop, transparency.

I love the methods for ensuring transparency noted throughout Humans in the Loop, especially the notion of keeping humans ‘in the loop’ whenever decisions are made, and of treating AI as a flexible tool, so that users have to understand what they are doing and what they are using. Sliders that dictate how much AI is being used also seem like a great idea. I think that to ensure more just developments in the AI field, our society must adapt: offer more education to build the public’s technological literacy, encourage (or enforce) transparency from companies about how they train their AI systems, and institute more solid codes of ethics, with real consequences for tech systems that cause harm.
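To make the slider idea concrete, here is a minimal sketch (my own toy example with made-up names, not code from either reading) of how an ‘AI amount’ control could blend a user’s hand-set parameters with a model’s suggestion:

```python
import numpy as np

def blend_with_ai(user_params, ai_params, ai_amount):
    """Interpolate between the user's settings and the model's suggestion.

    ai_amount is the slider value: 0.0 keeps the human fully in control,
    1.0 hands the decision entirely to the model.
    """
    ai_amount = float(np.clip(ai_amount, 0.0, 1.0))
    return (1.0 - ai_amount) * np.asarray(user_params) + ai_amount * np.asarray(ai_params)

# Hypothetical synth parameters: cutoff, resonance, reverb.
user_settings = [0.2, 0.8, 0.5]      # what the person dialed in by hand
model_suggestion = [0.9, 0.1, 0.4]   # what the model proposes
print(blend_with_ai(user_settings, model_suggestion, ai_amount=0.3))
```

Even in a sketch this small, the human stays in the loop: the model only ever nudges the result as far as the person lets it.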

While this seems very difficult to achieve, I do believe that we can imagine a “High-tech, good-life” for our future world that includes just, artful uses of Artificial Intelligence. Lately I’ve also been seeing more artful, wholesome uses of AI that have made me love it a little more again: for example, Allison Parrish’s creative writing with vectorized words, musical innovations like Holly Herndon’s Holly+, and an AI chatbot made to let people have real-time dialogue with their ‘inner child’.
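For a rough sense of what ‘vectorized words’ can mean in practice, here is a hedged sketch (my own example, not Parrish’s code) using gensim’s pretrained GloVe vectors: treat each word as a point in space, take the midpoint between two words, and ask which words live nearby.

```python
import gensim.downloader as api

# Small pretrained GloVe model; downloads on first run.
vectors = api.load("glove-wiki-gigaword-50")

# Every word is a vector, so we can do arithmetic on meanings:
# here, find words near the midpoint of "night" and "ocean".
midpoint = (vectors["night"] + vectors["ocean"]) / 2
for word, score in vectors.similar_by_vector(midpoint, topn=5):
    print(f"{word}\t{score:.3f}")
```

The creative part comes from what you do with that space (interpolating, drifting, averaging), which is the kind of play the reading’s title points to.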

I love the range and excitement of these projects, and they make me hopeful for what we as computer scientists can continue to bring to society. As with everything else we have learned about in this class, I am sure that there are ways that AI can be meaningfully, artfully designed.
