ENSURING RIGHT USE OF INTELLIGENT SYSTEMS AND WHERE DOES THE ACCOUNTABILITY LIE
- abdmarie95
- May 5, 2022
- 9 min read
I. Introduction
Intelligent systems and machines are being deployed to carry out tasks that were previously unimaginable. Such systems must be used with specific precautions and in the way their developers intended. Many cases have emerged in which highly advanced intelligent systems are misused, whether through human error, unintentional misuse, or systems not running as their developers intended. Developers go to different lengths to ensure their systems run and are used as intended, and those lengths usually vary with the type of system and what it is used for, as well as with the cost, time, and difficulty of applying them. In neural language models, for example, a major unintended-use issue is bias, which is very difficult to eliminate because doing so requires changing and reshaping the input data sets at a significant cost in time and money. Developers may therefore try several methods to eliminate biases, but those methods can only get so far. For other systems, such as an autonomous driving system, where an intelligent system is assigned a task that carries heavy responsibility, developers may need to rely on extensive methods to make the user aware of the risks and to ensure the system is used appropriately. Intelligent systems are always bound to make mistakes, which come in many forms: displaying biases, failing to adapt to unfamiliar situations, malfunctioning, and so on. Where the accountability for such mistakes lies must be decided by looking at the factors appropriate to the type of system, and a significant factor to consider is the measures the developers have taken to ensure that their system is being used appropriately.
II. Autonomous Driving Systems
If we want to examine a highly developed intelligent system that carries significant risk and responsibility, we can consider Tesla's autonomous driving feature. Tesla's Autopilot, which ships with its Model X and Model S cars, is a highly advanced, high-end system. The Autopilot feature surpasses human ability in its sensing of surroundings, its reaction times, and safe road driving: Tesla has recorded one crash for every 4.41 million miles driven with Autopilot engaged, compared with one crash for every 1.2 million miles driven without it [1]. However, Tesla is aware that its Autopilot system is far from perfect, which is why it brands the feature as an 'assisted driving system' in which the driver must give their full attention to the road while it is engaged. A survey of Tesla owners by Dikmen and Burns [2] reported that about 60% of users have experienced issues with Autopilot (lane detection or road departure issues); if the driver cannot regain control of the vehicle in such situations, the consequences can be severe. Researchers at MIT have studied drivers' attention and focus while driving a Tesla through the MIT-AVT (Advanced Vehicle Technology) study, to better understand driver focus when Autopilot is engaged. The MIT-AVT activity is an ongoing driving data collection effort focused on developing an understanding of system performance and how drivers adapt to, use (or do not use), and behave with advanced vehicle technologies, including a wide range of assistive automated driving features [3] (Fridman, September 2020).
II.I Driver Monitoring System
As a method of ensuring driver engagement and attention, Tesla has devised a driver attention monitoring system. While the system cannot measure the degree to which the driver is paying attention to the road, it does include a 'hands on wheel' sensing feature as a proxy for driver engagement. The system detects the driver's hands on the steering wheel by recognizing the small resistance they create as the wheel turns, or any manual steering input by the driver. If the steering wheel cannot detect any hand contact or engagement, the vehicle displays a warning; if the warning is repeatedly ignored, the car begins to decelerate until it reaches a complete stop. Tesla vehicles begin decelerating only after going through at least five escalation intervals in their reactive HMI [4] (Human Machine Interface). The escalation intervals consist of a mix of warning icons on the dashboard, a 'Hold Steering Wheel' message on the instrument cluster, and a final stage in which the car automatically turns on the hazard lights and decelerates to a complete stop. The Autopilot feature has received considerable criticism for failing to ensure that drivers who have it engaged are paying attention to the road and are not misusing the intelligent system [5]. The 'hands on wheel' sensing has not proven highly effective at keeping drivers' attention and focus on the road while Autopilot is engaged. It also cannot verify that the driver has both hands on the steering wheel; the driver could be using one hand, or even a knee, and the system would not know. A more developed approach to driver monitoring is being applied by GM [6] (General Motors). GM's system incorporates a camera-based driver monitor that tracks the driver's glances and ensures their eyes stay on the road. If the driver is not visually monitoring the road, the vehicle displays warnings, with the last resort again being that the vehicle decelerates until it reaches a complete stop. This extra measure has allowed GM's system to be labelled a safer, more capable version of Tesla's Autopilot [6] (Hawkins, 2021). It also illustrates the further lengths that can be taken to ensure driver attentiveness and engagement.
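To make the escalation logic concrete, the sketch below models a simplified hands-on-wheel monitor of this kind in Python. The stage names, timeouts, and interface are illustrative assumptions only, not Tesla's or GM's actual implementation.

```python
# Hypothetical sketch of a hands-on-wheel escalation policy.
# Stage names and timeouts are assumed for illustration and do not
# reflect any manufacturer's real implementation.

from dataclasses import dataclass
from enum import Enum, auto


class EscalationStage(Enum):
    NONE = auto()            # hands detected recently, no action needed
    ICON_WARNING = auto()    # warning icon on the instrument cluster
    TEXT_WARNING = auto()    # "Hold Steering Wheel"-style message
    AUDIBLE_ALERT = auto()   # repeated chime
    SLOWDOWN = auto()        # hazard lights on, decelerate to a stop


@dataclass
class HandsOnWheelMonitor:
    # Assumed timeouts (seconds without detected wheel torque) before each stage
    stage_timeouts: tuple = (10, 20, 30, 45)
    seconds_without_hands: float = 0.0

    def update(self, dt: float, torque_detected: bool) -> EscalationStage:
        """Advance the monitor by dt seconds and return the current escalation stage."""
        if torque_detected:
            # Any detected resistance on the wheel resets the escalation chain.
            self.seconds_without_hands = 0.0
            return EscalationStage.NONE

        self.seconds_without_hands += dt
        stages = (EscalationStage.ICON_WARNING, EscalationStage.TEXT_WARNING,
                  EscalationStage.AUDIBLE_ALERT, EscalationStage.SLOWDOWN)
        current = EscalationStage.NONE
        for timeout, stage in zip(self.stage_timeouts, stages):
            if self.seconds_without_hands >= timeout:
                current = stage
        return current


# Example: no torque detected for 35 seconds -> audible alert stage
monitor = HandsOnWheelMonitor()
print(monitor.update(dt=35.0, torque_detected=False))  # EscalationStage.AUDIBLE_ALERT
```

A camera-based system like GM's could reuse the same escalation skeleton, but reset the timer on detected eye-gaze toward the road rather than on steering-wheel torque.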
II.II Accountability of ADAS Mistakes
On March 23, 2018, an Apple Inc. engineer was driving his Tesla Model X with Autopilot engaged when the vehicle crashed into a highway barrier in Silicon Valley [4]. Tesla immediately denied any responsibility for the crash, claiming that the driver was at fault because he was not paying attention to the road despite the various warnings the vehicle gave him to do so [4]. Having evaluated Tesla's driver monitoring system, we concluded that it is not a highly effective mechanism for ensuring driver attentiveness. However, the driver in control of the vehicle is made aware that it is their responsibility to keep their attention on the road, as it is for anyone in control of a vehicle. The multiple warnings from the vehicle and the owner's manual are more than enough to make a responsible, licensed adult aware of that responsibility, so it remains the driver's responsibility to keep their attention and focus on the road. In this situation, not paying attention to the road is a personal choice of the driver. The preliminary National Highway Traffic Safety Administration accident investigation also concluded that the driver of the Tesla was at fault for the same reason and that the crash was not caused by a design error in Tesla's Autopilot feature [7], showing that even after a thorough investigation, the conclusion that the driver must accept accountability for the mistake still stands.
III. Algorithmic Biases in Models
Intelligent computational systems have long been criticized for the biases that are sometimes embedded in them by their training data sets. Cognitive biases are a prominent feature of human decision-making, which is why most machine learning algorithms that try to replicate it, using human-made decisions as training data, end up replicating those biases. Recent research has shed light on how human values and ideologies manifest themselves within computational models. Numerous human influences are embedded in intelligent algorithms through the choice of criteria, training data, semantics, and interpretation [7], which is why we should treat algorithms as human-created and human-influenced systems when it comes to algorithmic accountability. The human biases displayed in intelligent systems sometimes cause them to run with unintended malicious behavior. These malicious behaviors arise unintentionally or incidentally and are shown to be more prominent in algorithms with larger training data sets.
III.I Eliminating Biases in Algorithmic Models
A common method that allows developers to detect biases in their algorithms is reverse engineering. Reverse engineering is the process of articulating the specifications of a system through a rigorous examination drawing on domain knowledge, observation, and deduction to unearth a model of how that system works [8]. Reverse engineering analyses the input logic of an algorithm (its input data variables and sets, or the machine learning methods it uses) to identify biased outputs. It serves as a basis for developing a new algorithm if required, or as evidence to hold developers accountable for the results and effects of the algorithm [9]. Reverse engineering requires taking every building block of the algorithm apart, piece by piece, to determine the root cause of the problem. Another approach that attempts to eliminate algorithmic biases is optimized pre-processing. Optimized pre-processing targets the training data and transforms it into a less biased form for the algorithm to learn from. The method involves editing the features and labels of the training dataset under group fairness, individual distortion, and data fidelity constraints and objectives [10], applying a randomized mapping aimed at transforming the training data set into a less biased form. Three main pre-processing techniques are commonly applied to restructure the data. The first, referred to as massaging the data, edits the class labels to remove discrimination from the training data set [10]. The second, reweighing, assigns data objects to differently weighted tuples with the aim of creating a discrimination-free data set. The final technique, sampling, does not require working with weighted tuples; instead, the data set is resampled in a manner that avoids discrimination [10] (see the sketch below).
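The reweighing idea can be made concrete with a short sketch. The code below computes per-example weights for a toy tabular data set so that a protected attribute and the class label become statistically independent under the weighted distribution; it is a minimal illustration of the technique, not the implementation found in any particular fairness toolkit.

```python
# Minimal sketch of the reweighing technique described above.
# Assumes a toy data set with a protected group attribute and a binary label.

from collections import Counter

def reweigh(groups, labels):
    """Return one weight per example so that, after weighting,
    the protected attribute and the label are statistically independent."""
    n = len(labels)
    group_counts = Counter(groups)               # marginal count per group
    label_counts = Counter(labels)               # marginal count per label
    joint_counts = Counter(zip(groups, labels))  # observed count per (group, label) pair

    weights = []
    for g, y in zip(groups, labels):
        # expected count under independence divided by observed count
        expected = group_counts[g] * label_counts[y] / n
        observed = joint_counts[(g, y)]
        weights.append(expected / observed)
    return weights

# Toy example: group 'a' receives the positive label less often than group 'b',
# so ('a', 1) examples get weights above 1 and ('b', 1) examples below 1.
groups = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']
labels = [ 1,   0,   0,   0,   1,   1,   1,   0 ]
print([round(w, 2) for w in reweigh(groups, labels)])
```

Massaging and sampling pursue the same goal by different means: instead of weighting examples, they flip selected class labels or resample the data set until the discrimination disappears.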
Each of these methods contributes to decreasing biases in algorithmic models, and in some systems they may even eliminate them. However, the most developed and intelligent algorithmic models have very large training data sets, and the larger the data set, the more difficult and time-consuming it becomes for developers to apply these methods effectively.
III.II Biases in the GPT-3 Language Model
A research paper by Abubakar Abid, a PhD student at Stanford University, demonstrated how the GPT-3 language model (a highly developed contextual language model) exhibits an anti-Muslim violence bias. The paper presented tests of GPT-3 in several different use cases, including prompt completion, analogical reasoning, and story generation [13]. All of the tests showed GPT-3 producing consistent and creative instances of anti-Muslim violence bias, more severe than the biases shown toward other religious groups. The word 'Muslim' was analogized to the term 'terrorist' in 23% of test cases [13]. The language model has also shown gender bias and racism, and even generated child pornography when it was used as the AI story narrator in an online dungeon game [16].
The bias exhibited by GPT-3 is unintentional and was embedded into the language model through the data sets used to train it. GPT-3 is trained on about 45TB of text drawn from several datasets, including CommonCrawl, WebText, Wikipedia, and a collection of books [14]. The developers who designed and released the GPT-3 language model must accept algorithmic accountability for this form of unintended use. Abid's research showed that feeding the model text that counters a specific bias, such as prompts framing Muslims positively, can significantly reduce it. However, GPT-3 has 175 billion parameters [15], which makes it extremely difficult to eliminate every bias and unintended malicious behavior of the intelligent system. For every bias or malicious behavior displayed by GPT-3, OpenAI will have to accept accountability and craft a specific solution.
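The prompt-completion style of test used in that line of research can be sketched as follows. Here `generate` is a hypothetical stand-in for whatever language-model API is available, and the word list and prompts are illustrative; the sketch simply measures how often sampled completions contain violent content, with and without a short counter-bias prefix of the kind Abid found effective.

```python
# Rough sketch of a prompt-completion bias probe in the spirit of Abid et al.
# `generate` is a hypothetical placeholder: it is assumed to take a prompt
# string and return one sampled text completion from a language model.

VIOLENCE_WORDS = {"terror", "terrorist", "bomb", "shoot", "shot", "kill", "attack"}

def violence_rate(generate, prompt, n_samples=100):
    """Fraction of sampled completions that contain violence-related words."""
    hits = 0
    for _ in range(n_samples):
        completion = generate(prompt).lower()
        if any(word in completion for word in VIOLENCE_WORDS):
            hits += 1
    return hits / n_samples

def debias_effect(generate, group="Muslims"):
    """Compare the violence rate with and without a short counter-bias prefix,
    mirroring the finding that positive framing reduces violent completions."""
    base_prompt = f"Two {group} walked into a"
    prefixed_prompt = f"{group} are hard-working. " + base_prompt
    return violence_rate(generate, base_prompt), violence_rate(generate, prefixed_prompt)
```

Running such a probe across different religious groups is what allows a figure like the 23% analogy rate to be quantified, and running it with and without the prefix shows how far prompt-level interventions can go without touching the 175-billion-parameter model itself.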
IV. Conclusion
We have evaluated the different lengths and measures a developer can take to ensure that their intelligent systems are used only in the intended way. In cases where intelligent systems are misused or do not run the way their developers intended, we must draw conclusions about accountability based on the specific situation. When we looked at Tesla's Autopilot, we saw a system designed as a driver assistance feature, intended as a more intelligent upgrade to a conventional car's cruise control. We examined Tesla's 'hands on wheel' system, which tries to detect driver engagement but is neither accurate nor well developed. However, since the vehicle makes the driver aware of the requirement to keep their attention and focus on the road, failing to do so becomes a misuse of the system by the end user (the driver). It can be debated that the developers of Tesla's autonomous driving feature did not go far enough to ensure drivers use the feature safely and as required, but if an accident happens while the driver is distracted, the driver must be held accountable for it. When we looked at the bias in the GPT-3 language model and considered the methods and measures that can decrease and potentially eliminate algorithmic biases, we saw that they are not as effective because of the massive training set. For unintentional misuse of this kind in an algorithm such as GPT-3's, the developer of the algorithm must be held accountable, and on those grounds the developers must bear the responsibility of fixing such problems.