Have you ever scrolled through your recommended videos on YouTube and been alarmed at some of the content you found? If so, you’re not alone. A recent article in the Wall Street Journal detailed the results of a new study by Mozilla, in which more than 70 percent of the videos participants flagged as objectionable, including misinformation and sexualized content, had been recommended to them by YouTube’s own algorithm. Though the algorithm has undergone repeated revisions in the past few years, YouTube’s efforts appear to be floundering as users are confronted with potentially misleading or harmful content. In 2020, the platform came under fire simultaneously for doing too much and too little, with some complaining about speech censorship and others criticizing the spread of fake news. Throughout its brief history, YouTube has also been criticized for leaving children vulnerable to inappropriate or predatory content. YouTube has a problem with sexual content, but is it because the algorithm works well, or because it doesn’t work well enough? In a world that often defends the production, sale, and distribution of pornographic content, on what grounds can YouTube ultimately stand strong in defending its viewers from unwanted sexual content?
The Algorithm Versus Human Judgment
The first problem facing YouTube as it seeks to minimize the amount of harmful content on its platform is the conflict between its algorithm and human judgment. According to The Verge, YouTube’s algorithm is intended to achieve two goals: “finding the right video for each viewer and enticing them to keep watching.” Drawing on metrics such as watch history, appeal, engagement, satisfaction, topic interest, competition, and seasonality, YouTube populates each user’s homepage with recommendations, ranks the results of any given search, and selects the suggested videos queued up for viewers. While these inanimate systems appear to be neutral, they essentially supplant human judgment by adjudicating what the user has access to and sees. When we give over our ability to sort and discover according to our own organic judgments, we leave very important decisions about morality in media up to the amoral outputs of an unknowing algorithm. When we settle for an opt-in/opt-out model, we relinquish our ability to exercise our agency in a manner befitting the complexity of human life, with consequences for how we defend ourselves and our children from immoral and unwanted sexual content both online and in the public square.
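To make concrete what such a recommendation engine actually “decides,” consider a deliberately simplified sketch of metric-weighted ranking. The signal names, weights, and example videos below are illustrative assumptions, not YouTube’s actual system; the point is only that a score of this kind asks whether a video is likely to be watched, never whether it ought to be.

```python
# Purely illustrative sketch of metric-weighted ranking.
# Signal names and weights are assumptions, not YouTube's actual system.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    signals: dict  # hypothetical per-video signals in the range 0.0-1.0

# Hypothetical weights over the kinds of signals the algorithm is said to use.
WEIGHTS = {
    "watch_history_match": 0.30,
    "engagement": 0.25,
    "satisfaction": 0.20,
    "topic_interest": 0.15,
    "seasonality": 0.10,
}

def score(video: Video) -> float:
    """Collapse the signals into one number. Nothing here asks whether the
    content is appropriate, only whether it is likely to keep a viewer watching."""
    return sum(WEIGHTS[k] * video.signals.get(k, 0.0) for k in WEIGHTS)

def recommend(videos: list[Video], top_n: int = 3) -> list[Video]:
    """Rank candidates by score alone; 'judgment' reduces to arithmetic."""
    return sorted(videos, key=score, reverse=True)[:top_n]

if __name__ == "__main__":
    candidates = [
        Video("Cooking tutorial", {"watch_history_match": 0.2, "engagement": 0.4, "satisfaction": 0.7}),
        Video("Outrage-bait clip", {"watch_history_match": 0.9, "engagement": 0.9, "satisfaction": 0.3}),
    ]
    for v in recommend(candidates, top_n=2):
        print(f"{score(v):.2f}  {v.title}")
```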
The Role of Those Promoting Public Indecency
The second problem YouTube has to contend with is that even though the algorithm is intended to be neutral, there are agents behind it who cannot help but intervene in what people see and don’t see. They decide what information is worth seeing through content policing, and since they themselves live in a society more complex than that of the internet, they inevitably bring their own cultural judgments to bear on what users are shown. Sometimes they permit objectionable content, about which they have not yet formed judgments, to flourish. Deep-seated conflicts over what constitutes acceptable public conduct, for example, may or may not be reflected, depending on the limitations of the algorithm or the whims of the agent behind it. Take YouTube’s cut-and-dried community guidelines regarding child safety, which reflect the commonsense view that child sexualization is always unacceptable because it “endangers [their] emotional and physical well-being.” And yet, at this very moment, we are seeing major publications give platforms to activists who, contrary to the very logic underpinning YouTube’s policy, insist children should see public displays of sex at LGBT parades. As national newspapers increasingly dignify such stances, on what grounds will YouTube continue to justify policing materials that sexualize children or otherwise expose them to public sexual acts?
Forging the Path of a Moral Agent
The good news is that people of diverse backgrounds and experiences acknowledge that algorithms fail to capture real-world complexity, and they note the potentially adverse effects on users’ well-being. According to one respondent to the Pew Research Center’s survey on algorithms, we are creating algorithms “faster than we can understand or evaluate their impact”:
The expansion of computer-mediated relationships means that we have less interest in the individual impact of these innovations, and more on the aggregate outcomes. So we will interpret the negative individual impact as the necessary collateral damage of ‘progress.’
By the same token, the impact that early exposure to sexual content has on individual children is far too important to ignore for the sake of entertainment, intellectualizing, or politicking. The rights of children and adults to personal sexual integrity and the exercise of reasonable boundaries are worth defending, and they can be left neither to the self-interested agents who determine what content is worthy of policing, nor to algorithms that lack the moral capacities of self-reflection and conviction. We human beings have a responsibility to defend the dignity of the vulnerable against violation by careless actors. As people who seek to promote the goods of family, marriage, and sexual integrity, we should be on guard against relinquishing our agency to algorithms that cannot think or act, and we should ensure that we continue to engage in the public debate over matters of importance – especially those matters of public decency that affect the health of our children and our culture.