In 2015, Google was involved in a scandal that served, for many, as a first introduction to how algorithmic systems absorb racist and sexist stereotypes and actively perpetuate those same biases within technology. The scandal arose when typing “beautiful cornrows” into Google Search returned pictures of white women and men wearing cornrows, whereas typing “ugly cornrows” returned pictures of black men and women in cornrows.
Although Google apologized and vowed to keep a closer check, algorithmic systems have perpetuated racist and sexist stereotypes for a long time, and now, in the 21st century, when technology has seeped into our daily lives and shapes how our personal and public information is presented, it is crucial that a stand is taken on ethical issues before an algorithmic system is ever built. Alexandria Ocasio-Cortez, the U.S. Representative for New York’s 14th congressional district, highlighted this ethical issue in 2019, in a discussion with writer Ta-Nehisi Coates at a Martin Luther King Jr. Day event, where she said, “Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions. They’re just automated assumptions. And if you don’t fix the bias, then you are just automating the bias.”
In the 21st century, the appeal of machine learning has surged as companies like Google, Facebook, Twitter and many more can now comb through millions of pieces of data and make correlations and predictions about the world. But machine learning has a dark side: if not given proper guidance and checks, it can embed the same racist and sexist biases within the technology sector and influence society at large.
The criminal justice system already uses algorithmic systems to decide which defendants pose a higher risk to public safety and whether they should be incarcerated. A 2016 investigation by ProPublica into whether algorithmic systems reinforce racism in court hearings showed that instruments built on public data are heavily shaped by racial disparities that play a great role in defining people’s experiences. The investigation examined several states that have adopted the risk assessment software COMPAS, which helps courtrooms decide which defendants are more likely to re-offend; guided by these assessments, judges reach conclusions about the future of defendants and convicts, determining everything from bail amounts to sentences. After critically analyzing how the system assessed 7,000 people, ProPublica concluded that black people were three times more likely than white people to be labeled high risk for committing the same crimes.
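To make the kind of disparity ProPublica measured more concrete, here is a minimal sketch in Python, using entirely synthetic records rather than ProPublica’s data, of how an audit can compare the rate at which people who never re-offended were nonetheless labeled high risk, broken down by race:

    # Hedged, illustrative sketch of a risk-score audit; all records are synthetic.
    # Each record is (race, labeled_high_risk, re_offended).
    records = [
        ("black", True, False), ("black", True, False), ("black", False, False),
        ("black", True, True), ("black", False, True),
        ("white", False, False), ("white", False, False), ("white", True, False),
        ("white", True, True), ("white", False, True),
    ]

    def false_positive_rate(group):
        """Share of people in `group` who did not re-offend but were labeled high risk."""
        non_offenders = [r for r in records if r[0] == group and not r[2]]
        flagged = [r for r in non_offenders if r[1]]
        return len(flagged) / len(non_offenders)

    for group in ("black", "white"):
        print(f"{group}: false positive rate = {false_positive_rate(group):.0%}")

In this toy data both groups re-offend at exactly the same rate, yet non-re-offenders in one group are flagged twice as often as in the other, which is the shape of the disparity the investigation described.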

Algorithms are a series of instructions written by programmers, and they often reflect the values and ethics of the company or the programmers writing the code installed in the system. Machine learning algorithms evolve and learn based on the data humans provide them. In one such case, in 2015, Google apologized when black users complained that an image-recognition algorithm in its photo application had labeled them as gorillas.
“Even if they are not designed with the intent of discriminating against those groups, if they reproduce social preferences even in a completely rational way, they also reproduce those forms of discrimination,” according to David Oppenheimer, who teaches discrimination law at the University of California, Berkeley.
Moreover, a study conducted at Harvard University concluded that the use of algorithmic systems in courts can produce biased and problematic decisions about the lives of defendants and inmates. A Harvard graduate student compiled evidence of algorithmic systems in 49 out of 50 states that are used to influence bail, pre-trial and sentencing hearings. These systems draw on personal details such as family, criminal history, ethnicity and hometown to assess whether a defendant poses a high risk of danger to society, and they produce a report that is sent directly to judges; the research argued that this further increases the risk of bias.
Although pro-algorithm companies have argued that the data these systems produce is objective and free of bias, evidence shows that software is not free of human influence; it still evolves and functions according to the ethics and values of the people who build and use it. Recent controversies surrounding popular platforms such as Facebook and Google have revealed racist and sexist patterns in their algorithms. A study from George Washington University showed that a Google Images search for “CEO” returned fewer than five women in its results, even though roughly 45% of executives in the United States are women. Google also came under fire for transphobic elements in its autocomplete feature, when the search “Are trans women” suggested the completion “Are trans women going to hell?”
Ethnic minorities have also been subjected to abuse and backlash through bias in algorithmic systems, as popular social media applications such as Facebook and Twitter have been found to embody the same racist and homophobic characteristics in their data patterns. In 2017, Facebook was involved in a controversy when its rules were found “to advantage white men over black children when assessing objectionable content,” according to internal Facebook documents, as reported by ProPublica. The reporting showed that using racist, derogatory terms against black children was not barred, whereas attacks on “white men” would be labeled offensive. In 2017, Facebook was involved in another controversy when it was found to allow ad purchasers to target “Jew-haters” as a category of users.
Surveillance technology has also been found to carry racial and sexist bias, wrongfully targeting and scrutinizing people of certain races to a greater extent in shopping malls and other public places. In 2017, researchers analyzing software used to identify suspects in CCTV footage found several examples of bias in the identification systems, whose outputs are later stored in crime databases. For example, the systems flagged more people of Asian, African-American and South Asian descent and labeled them as potentially riskier than white people.

Racist and sexist elements within algorithmic systems can have far-reaching consequences, particularly in the 21st century, when society has come to rely on technology for everyday activities.
In conversation with Vox, the computer scientist Aylin Caliskan noted that even tasks as ordinary as recruiting new employees have been handed over to algorithmic systems: machine learning programs now screen resumes and, if left unchecked, can act on gender stereotypes in their decision making. “Let’s say a man is applying for a nurse position; he might be found less fit for that position if the machine is just making its own decisions,” she said.
“And this might be the same for a woman applying for a software developer or programmer position. Almost all of these programs are not open source, and we’re not able to see what’s exactly going on. So we have a big responsibility for trying to uncover if they are being unfair or biased.”
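To illustrate the mechanism Caliskan describes, here is a minimal sketch with made-up data, not any real screening product: a toy model that simply learns from skewed historical hiring decisions ends up scoring otherwise identical applicants differently by gender.

    # Hedged, illustrative sketch; the "historical" records below are synthetic.
    from collections import defaultdict

    # Historical hiring decisions: (role, gender, was_hired)
    history = [
        ("developer", "man", True), ("developer", "man", True),
        ("developer", "woman", True), ("developer", "woman", False),
        ("nurse", "woman", True), ("nurse", "woman", True),
        ("nurse", "man", False), ("nurse", "man", False),
    ]

    # "Training": estimate P(hired | role, gender) directly from the skewed history.
    counts = defaultdict(lambda: [0, 0])  # (role, gender) -> [hired, total]
    for role, gender, hired in history:
        counts[(role, gender)][0] += int(hired)
        counts[(role, gender)][1] += 1

    def score(role, gender):
        hired, total = counts[(role, gender)]
        return hired / total if total else 0.0

    # Applicants with identical qualifications, differing only in gender,
    # receive different scores because the history was skewed.
    print("man applying as nurse:      ", score("nurse", "man"))
    print("woman applying as nurse:    ", score("nurse", "woman"))
    print("man applying as developer:  ", score("developer", "man"))
    print("woman applying as developer:", score("developer", "woman"))

Nothing in the sketch mentions a stereotype explicitly; the skew comes entirely from the historical record the toy model is fit to, which is exactly why unexamined training data matters.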
A research paper published in the journal Science uncovered the biases of a machine learning technique known as word embedding, which was already transforming the way computers interpret speech and text and was being adopted for web search and machine translation.

The researchers found that troubling implicit biases documented in human psychology experiments are readily acquired by these algorithms. For instance, the words “female” and “woman” were closely associated with arts and humanities occupations, whereas “man” and “male” were associated with math and engineering professions.
The system was also more likely to associate European American names with pleasant words such as “happy” or “peaceful,” whereas African American names were more commonly associated with unpleasant words.
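The pattern the Science paper describes can be made concrete with a small sketch in the spirit of its word-embedding association tests; the tiny three-dimensional vectors below are invented purely for illustration, whereas the actual study measured associations in embeddings trained on large web corpora.

    # Hedged, illustrative sketch: toy word vectors, not real trained embeddings.
    import math

    def cosine(u, v):
        """Cosine similarity between two vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm

    vectors = {
        "woman":    [0.9, 0.1, 0.2],
        "man":      [0.1, 0.9, 0.2],
        "poetry":   [0.8, 0.2, 0.1],   # arts/humanities terms
        "art":      [0.9, 0.2, 0.2],
        "math":     [0.2, 0.8, 0.1],   # math/engineering terms
        "engineer": [0.1, 0.9, 0.1],
    }
    arts = ["poetry", "art"]
    stem = ["math", "engineer"]

    def association(word, targets):
        """Mean cosine similarity between a word and a set of target terms."""
        return sum(cosine(vectors[word], vectors[t]) for t in targets) / len(targets)

    for word in ("woman", "man"):
        bias = association(word, arts) - association(word, stem)
        # Positive values mean the word sits closer to the arts terms than to the STEM terms.
        print(f"{word}: arts-vs-STEM association = {bias:+.3f}")

Run on real embeddings learned from web text, the same measurement reproduces the kinds of gendered and racial associations the researchers reported.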
Research has also shown that algorithmic bias can have a profound impact on human behavior and relations. A previous study concluded that an identical curriculum vitae is 50% more likely to result in an interview invitation if the candidate’s name is European American than if it is African American.
Already, the health and medicine industry is coming to rely more heavily on technology: algorithmic systems are now being used to help doctors find the right course of treatment for their patients, and research is underway on whether algorithms can help build machines that predict mental health crises.
To reduce the chance that machine learning programs encode bias and play a harmful role in human relations, there needs to be more diversity among the programmers who write this code.
Moreover, more awareness is needed of the ethical issues surrounding algorithms, so that safeguards can be developed to check whether the data provided reflects historical prejudice or verifiable fact. People who use these programs should at least be aware of the potential for ethical violations in algorithmic systems, and remember that a computer can produce a result even more biased than a human’s.