When I heard about the Algorithmic Justice and Online Platform Transparency Act of 2021, I wanted to read the bill for myself because I’m keen on responsible innovation and machine learning fairness. I am by no means an expert, but I have been in the tech industry for over 13 years, have implemented user-targeting algorithms that affect purchase behavior, and recently earned a Certificate in Predictive Analytics from UCI. (As of 2021, I also plan to earn a Certificate in Philosophy and Ethics from Harvard Extension School.) There’s not much information about the bill online (only 579 results on Google at the time of this post), so I’ve decided to publish my initial take as a thought experiment.

The bill aims to curb the negative impacts of “unfair or deceptive acts or practices” in algorithms, but the problems aren’t black and white, and the bill doesn’t address how transparency will affect beneficial algorithms. First, I’ll go over why I think the bill is important and its real-world implications. Second, I’ll explore the gray areas and some potential drawbacks or tradeoffs. Lastly, I’ve jotted down some open questions and final thoughts.

What the bill is designed to do

The 38-page bill that Senator Edward J. Markey and Congresswoman Doris Matsui introduced is summarized on dataguidance.com as follows:
“… the bill would prohibit harmful algorithms, increase transparency into websites’ content amplification and moderation practices, and commission a cross-government investigation into discriminatory algorithmic processes throughout the economy. Furthermore, the bill would introduce, among other things:

  • notice requirements for online platforms that utilise algorithmic processes;
  • five-year data retention obligation of algorithmic processes;
  • rules for de-identification of personal information;
  • transparency requirements for advertising practices;
  • right to data portability for data subjects; and
  • rules regarding discriminatory practices.”

It is a lofty goal. As a consumer and citizen, I find that the bill makes sense; it’s written with the protection of the people in mind. However, it raises some interesting questions about ethics in technology and its practical application.

What algorithms can do

To understand why this bill is important, it’s useful to know what algorithms can be used for. For some time now, technology has automated decision-making through decision management systems. Algorithms in these systems answer questions such as: What product is the user most likely to purchase after viewing this item? What movie is the user most likely to watch after viewing these other shows? What is the risk that this person will be unable to pay their monthly car bill? Instead of manually asking a person a series of questions to try to predict the answer, a mathematical formula can be applied to the person’s data to determine the answer automatically and act on it, such as displaying certain results to the user or applying the optimal interest rate for their long-term purchase based on their risk score.

There are many different types of algorithms and models for producing the most accurate results, and because of the large amount of data available and ever-increasing computing power, they can be tweaked ad infinitum. An algorithm can determine whether a charge to a credit card is fraudulent (based on data such as geolocation and purchasing patterns), and it can also build a customer’s psychographic profile in order to display the most compelling advertisements to each unique user.
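
To make this concrete, here is a minimal, hypothetical sketch of that kind of automated decision. The feature names, weights, and rate tiers are all invented for illustration; a real lender’s model would be far more sophisticated.

```python
# A toy risk score driving an automated interest-rate decision.
# All features, weights, and rate tiers below are hypothetical.

def risk_score(applicant: dict) -> float:
    """Combine a few applicant features into a score in [0, 1]."""
    score = 0.0
    score += 0.4 if applicant["missed_payments_last_year"] > 2 else 0.0
    score += 0.3 if applicant["debt_to_income"] > 0.5 else 0.0
    score += 0.3 if applicant["months_at_current_job"] < 6 else 0.0
    return score

def interest_rate(applicant: dict) -> float:
    """Map the risk score to an annual rate -- the automated decision."""
    score = risk_score(applicant)
    if score < 0.3:
        return 0.049   # low risk: 4.9% APR
    elif score < 0.7:
        return 0.089   # medium risk: 8.9% APR
    return 0.149       # high risk: 14.9% APR

applicant = {"missed_payments_last_year": 1,
             "debt_to_income": 0.35,
             "months_at_current_job": 24}
print(interest_rate(applicant))  # -> 0.049
```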

There are currently no ethical guidelines governing how algorithms may be used. Consequently, they have been used (knowingly or not) in ways that perpetuate systemic unfairness in society. (Examples: Amazon’s AI recruiting tool that showed bias against women, and machine bias against Black defendants.) This bill is important because it aims to ensure algorithmic accountability through transparency. However, there are also many instances where algorithms are used in innocuous or even helpful ways (e.g., relevant product advertisements, fraud detection for purchases, removal of pornographic content on certain sites), so we need to think about this bill from both sides.

Let’s dive deeper into a reasonable use of algorithms: product or content recommendations on a website. The simplest example would be if you were shopping online for a dress, and an algorithm determined what other products to display on that page based on data such as your viewing and prior purchasing patterns. It might also be based on other people’s data: those who viewed this product are likely to click through to this other item, or to engage more with these other types of products or colors. This kind of recommendation engine seems innocent enough, and the user might even appreciate it if it matches their taste in products. It’s a win-win: the company can sell more products, and the customer is more satisfied with curated options that save them time searching for products they like. The bill’s accusation that “online platforms employ manipulative dark patterns… that create vastly different experiences for different types of users” is actually a great thing in this case; it’s the coveted “personalization” of websites that companies strive for.
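
Here is a hypothetical sketch of the simplest version of that idea: a “people who viewed this also viewed” recommender built from co-occurrence counts. The session data and product names are invented for illustration.

```python
# A toy co-occurrence recommender. Sessions and products are invented.

from collections import Counter
from itertools import permutations

sessions = [
    ["red_dress", "black_heels", "clutch_bag"],
    ["red_dress", "black_heels"],
    ["red_dress", "denim_jacket"],
    ["black_heels", "clutch_bag"],
]

# Count how often each ordered pair of products appears in the same session.
co_views = Counter()
for session in sessions:
    for a, b in permutations(set(session), 2):
        co_views[(a, b)] += 1

def recommend(product: str, k: int = 2) -> list:
    """Return the k products most often co-viewed with `product`."""
    scores = {b: n for (a, b), n in co_views.items() if a == product}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("red_dress"))  # -> ['black_heels', ...]
```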

But what if the recommendation engine was for a website with user-generated content (UGC)? What if the content with the most engagement and clickthroughs was recommended more frequently? What if that content contained bad information? You see where I’m going with this. The algorithms companies use may not have been originally intended to be harmful or malicious. We’ve simply reached a point where companies can ruthlessly pursue user engagement regardless of who it benefits. Meanwhile, data scientists, business analysts, and product managers are just trying to do their jobs: building the model that yields the highest customer engagement, serving the most relevant ads, or creating a more addictive game to maximize profit for the business. Who is responsible for asking whether these actions are best for society at large? Conversations around ethics in technology usually come last instead of first[1], but this bill could prompt us to ask, from the start, who is going to benefit from the tech being developed.
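
The shift from the dress example to the UGC example is small in code but large in consequence. This hypothetical snippet ranks posts purely by observed clickthrough rate, with no check on quality; the data is invented for illustration.

```python
# An engagement-ranked feed: posts ordered by clickthrough rate alone.
# The post names and numbers are invented.

posts = [
    {"id": "well_sourced_article", "clicks": 120, "impressions": 10_000},
    {"id": "outrage_bait",         "clicks": 900, "impressions": 10_000},
    {"id": "misinformation_post",  "clicks": 700, "impressions": 10_000},
]

def ctr(post: dict) -> float:
    return post["clicks"] / post["impressions"]

# Ranking by engagement alone promotes whatever gets clicked,
# regardless of accuracy or harm -- the dynamic described above.
feed = sorted(posts, key=ctr, reverse=True)
print([p["id"] for p in feed])
# -> ['outrage_bait', 'misinformation_post', 'well_sourced_article']
```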

Potential negative impacts

The bill uses sweeping adjectives to describe tech, such as “harmful algorithms” and “manipulative dark patterns.” That kind of terminology is loaded language that may lead readers unfamiliar with algorithms to believe they are all bad or “dark.” The bill reads differently without those descriptors (i.e., “online platforms employ patterns”), which makes it easier to see the flipside and ask: what are the potential negative implications of algorithmic transparency?

Transparency is a double-edged sword. As the founder and CEO of the Chamber of Progress said in a statement:

“There’s some danger that fully lifting the hood on tech algorithms could provide a road map for hackers, Russian trolls, and conspiracy theorists.”

Adam Kovacevich, founder and CEO of the Chamber of Progress

This is true: if companies are transparent about how they algorithmically ban bad actors from their sites, those with malicious intent have an easier time getting around the rules. Take Cloudflare as another example. Cloudflare is a Content Delivery Network (CDN) that many websites rely on to protect them from certain kinds of online attacks (e.g., DDoS attacks and content scraping). If Cloudflare made its bot-detection algorithms transparent, bad actors could leverage that knowledge for ill intent. Would companies with “good” intentions or services be exempt from this bill? How do you define what is good?
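
To illustrate why, consider a deliberately naive, hypothetical bot detector that flags clients exceeding a fixed request rate (the threshold here is invented). Publishing the rule tells an attacker exactly how to evade it.

```python
# A naive rate-based bot detector. The threshold is hypothetical.
# If this rule were public, a bot could pace itself just under the
# limit and slip through.

MAX_REQUESTS_PER_MINUTE = 60  # assumed, now-public threshold

def looks_like_bot(requests_last_minute: int) -> bool:
    return requests_last_minute > MAX_REQUESTS_PER_MINUTE

print(looks_like_bot(500))  # True: blatant bot traffic is flagged
print(looks_like_bot(59))   # False: a throttled bot evades detection
```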

Furthermore, algorithms can be considered a “trade secret” behind a company’s success[2]. A company could learn from or copy a more successful company’s recommendation-engine algorithm. That might not necessarily be a bad thing, but since this bill would only apply in the U.S., would companies outside the U.S. now gain a competitive advantage by “taking a look under the hood” of U.S. big tech?

The bill also mentions content moderation. We’re pushing the boundaries of what is considered harmful or inappropriate, whether it’s ethical to allow such content to remain on a platform, and who is responsible for its exposure and amplification. Unlike traditional media such as radio and television, where U.S. content is moderated by the FCC, ownership of the internet is decentralized: anyone can create content, and it is up to the platforms to choose whether to moderate indecent content. Also unlike traditional media, internet content can be generated anonymously, lending itself to the most vile types of material that platforms must face and moderate[3]. Revealing the algorithms behind how a company identifies despicable content can backfire by being a boon to those who wish to circumvent the rules. So yes, “online platforms constantly engage in content moderation decision making, resulting in highly influential outcomes regarding what content is visible and accessible to users,” and for good reason.

Also, because we’re not yet at a point where content moderation is completely automated, platforms with UGC give content creators their own tools to ban or block others from interacting with them. Without those tools, trolls could abuse the platform and constantly harass people. So content moderation seems to serve a good purpose, but who decides if it’s fair, and how is that determined?
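
The circumvention risk is easy to demonstrate. Here is a hypothetical keyword-based moderation filter (the blocklist is a placeholder); once the exact rule is known, a trivial character swap defeats it.

```python
# A toy keyword moderation filter. The blocklist is a placeholder
# for real banned terms; the evasion below is the backfire risk
# described above.

BLOCKLIST = {"scamword"}

def is_blocked(text: str) -> bool:
    words = text.lower().split()
    return any(word in BLOCKLIST for word in words)

print(is_blocked("buy my scamword now"))   # True: caught by the rule
print(is_blocked("buy my sc4mword now"))   # False: evaded by a swap
```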

Open questions

One of the first thoughts I had about this bill, and what prompted me to explore its implications further, is the notion of making algorithms transparent. If an algorithm is simply a decision tree or a series of conditionals (i.e., if x then y), the choices that led to its decisions are obvious. However, because models like neural networks are a black box, we can’t glean insights into how a decision was made (although there is a desire to interpret this with “concept whitening” for image recognition). How will companies be held accountable for decisions they’re unable to explain?
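
The contrast is easy to see with a toy example in scikit-learn (the features and labels here are invented): a decision tree’s entire decision logic can be printed as human-readable rules, while a neural network’s learned weights offer no comparable explanation.

```python
# Fit a small decision tree on invented data and print its full
# decision logic as "if x then y" rules.

from sklearn.tree import DecisionTreeClassifier, export_text

X = [[25, 0], [40, 1], [30, 0], [55, 1]]   # e.g., [age, has_account]
y = [0, 1, 0, 1]                           # e.g., approved or not

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Every branch is inspectable -- this is what transparency can mean
# for simple models, and what a neural network cannot provide.
print(export_text(tree, feature_names=["age", "has_account"]))
```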

Furthermore, I think it will be relatively easy to identify overtly “harmful” algorithms, but more difficult to determine whether a seemingly neutral algorithm has harmful consequences. After all, who is to say what is “harmful”? If enabling addictive doomscrolling is detrimental to one’s mental health, but the intended purpose of the platform was to create the most engaging news feed, would that be a harmful algorithm or just an effective one? It is both.

Lastly, the definition of “online platform” seems very broad, yet it does not appear to include older culprits of systemic bias.

The term ‘‘online platform’’ means any public-facing website, online service, online application, or mobile application which is operated for commercial purposes and provides a community forum for user generated content, including a social network site, content aggregation service, or service for sharing videos, images, games, audio files, or other content.

Algorithmic Justice and Online Platform Transparency Act

Does it not apply to credit card companies, banks, or higher-education institutions? They were certainly using algorithms unfairly long before any of these modern tech companies.

Heading in the right direction

This bill tries to tackle many loosely related things under one umbrella.

“One possible mechanism to ensure an ethical balance is assessing the gain of those utilizing Big Data against those being influenced, ensuring that any individual affected ultimately ends up with improved well-being or other critical outcomes.”

Policy and population behavior in the age of Big Data

Perhaps this is a step toward making sure online platforms are more conscious of the algorithms they create, and their broader impact on society, before deploying them to production. More importantly, under this bill, a platform must provide “a description of how the type of algorithmic process was tested for accuracy, fairness, bias, and discrimination.”
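
What such testing might look like is an open question, but here is a hypothetical sketch of one of the most basic checks a platform could report: comparing favorable-outcome rates across groups (demographic parity). The data and group labels are invented for illustration.

```python
# A toy demographic-parity check on invented decision data.

decisions = [1, 0, 1, 1, 0, 1, 0, 0]          # 1 = favorable outcome
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

def positive_rate(group: str) -> float:
    """Share of favorable outcomes for one group."""
    outcomes = [d for d, g in zip(decisions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

ratio = positive_rate("b") / positive_rate("a")
print(f"disparate impact ratio: {ratio:.2f}")
# A common (and contested) rule of thumb flags ratios below 0.8.
```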

Overall, I look forward to seeing where this legislation ends up, and I hope to see roles such as tech ethicist and responsible AI specialist become more prevalent at online platforms.


These are just my personal opinions and I am not a lawyer nor a politician; I am just raising concerns and questions from what I know about tech and predictive analytics. If you have some answers or more thoughts on this, post them down below or reply to me on Twitter @rayanastanek.
