In the video, Caroline Sinders describes AI as a tool for researching digital issues such as online harassment, and she calls for design that is more sensitive to digital bias and harassment. It reminded me of her algorithm project in the BabyCastle arcade exhibition, where visitors could teach the AI which words they considered abusive or trolling, which I found interesting and enlightening. The system is like a person who repeats words without knowing what they mean; we have to tell it which ones are hate speech or harassment, and if it is fed nothing but trolling, that is eventually all it has in its mouth.

This made me think about the algorithms many apps use: they learn what each user likes and then show more of it. That can distort a person's worldview and values. If someone enjoys posting abuse or reading hate speech on Twitter, the algorithm will surface more of the same. This is troubling, because our mainstream apps are not built to foster a positive, healthy online community that benefits society; they focus on user stickiness and making money. This is exactly what Sinders means when she says design should eliminate capitalism, which is itself a paradox, since the companies doing the designing operate within capitalism and making money is their goal.
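To make the feedback loop above concrete, here is a minimal sketch (invented for illustration, not any real app's recommender): posts are ranked purely by how often the user has engaged with their tags, so whatever the user clicks on, they are shown more of.

```python
from collections import Counter

def update_profile(profile: Counter, post_tags: list) -> None:
    """Record one engagement: every tag on the clicked post gains weight."""
    profile.update(post_tags)

def rank_feed(profile: Counter, posts: list) -> list:
    """Score each post by summed tag weights; best engagement match first."""
    return sorted(posts,
                  key=lambda p: sum(profile[t] for t in p["tags"]),
                  reverse=True)

profile = Counter()
posts = [
    {"id": 1, "tags": ["cats"]},
    {"id": 2, "tags": ["hate"]},
    {"id": 3, "tags": ["news"]},
]

# The user engages with the hateful post twice...
update_profile(profile, ["hate"])
update_profile(profile, ["hate"])

# ...and the "hate" post now outranks everything else in the feed.
feed = rank_feed(profile, posts)
print(feed[0]["id"])  # -> 2
```

Nothing in this loop asks whether the reinforced content is healthy; it only optimizes for more of the same engagement, which is the stickiness problem the paragraph describes.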
I find this really thoughtful, because when shaping a product, designers should consider not only usability but the real situations the design will lead users into. If a user runs into an awkward or unsafe situation, how can the design help them feel safe? This is rarely covered when we study UI/UX design, yet it is important and subtle to verify, because it is hard to predict what will happen once people actually use the product. Some algorithms are indeed making progress on recognizing abuse, harassment, and hate speech, but as designers we seldom think about designing with algorithms, or about how to use them to address real problems.
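A toy sketch of the idea behind the exhibition piece and the abuse-recognition work mentioned above: users "teach" the system which words count as abuse, and it then flags messages containing them. The word list and messages here are invented for illustration; real moderation systems are far more sophisticated than keyword matching.

```python
def teach(lexicon: set, word: str) -> None:
    """Add a user-labeled abusive word to the shared lexicon."""
    lexicon.add(word.lower())

def flag(lexicon: set, message: str) -> bool:
    """Flag a message if any of its words appear in the learned lexicon."""
    return any(tok.strip(".,!?").lower() in lexicon
               for tok in message.split())

abuse_words = set()
teach(abuse_words, "idiot")  # a user labels this word as abusive

print(flag(abuse_words, "You idiot!"))  # -> True
print(flag(abuse_words, "Nice work!")) # -> False
```

The point of the sketch is that the system only knows what its users teach it, which is exactly why who does the teaching, and with what intent, matters so much.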