CREATING ‘SPACE’ FOR ARTIFICIAL INTELLIGENCE: Analysing the Status and Liability of Artificial Intelligence
This piece has been written by Chaitanya Gupta, a fourth-year law student at Jindal Global Law School, O.P. Jindal Global University.

Introduction
The European Space Agency (ESA) has sought to integrate Artificial Intelligence (AI) into its space programmes and expand its usage in space, allowing for better exploration of the Moon and Mars, as well as for planetary defence missions. Projects such as the Hera mission and ‘ESA_Lab@DFKI’, a technology transfer lab between ESA and the German Research Center for Artificial Intelligence (DFKI), provide for greater satellite autonomy and AI-assisted collision avoidance. Although AI’s use has largely been limited to transmitting and processing data, or to simple tasks requiring human intelligence input, it was used in a path-breaking manner in 2016 aboard NASA’s Curiosity Mars rover to enhance the way in which Mars was explored.
Additionally, there has been great innovation in the private sector, which increasingly employs AI as a tool for space applications. AIKO, for instance, develops software that allows spacecraft to autonomously re-plan, detect events and react. The rampant privatisation and commercialisation of space have resulted in increased investment in ‘NewSpace’ innovations that allow for AI assistance in spacecraft docking, collision avoidance, and the like.
This increased use of AI is a product of its several advantages: (i) AI makes decisions quickly without human input; (ii) it allows for safer space exploration; and (iii) it can act in extreme situations where humans or astronauts cannot, thereby allowing for more exhaustive and deep-space exploration. Examples of the same can be seen in AI programmes like ‘CIMON’ and ‘Robonaut’, which take on actions too risky for astronauts and reduce the stress astronauts face while performing tasks, making space exploration safer for them. AI can also process data and act faster than humans, and can recognise system or sensor failures and threats before humans do and inform them. However, there are two major concerns: first, that the domain of international space law does not contain adequate regulatory mechanisms to address such rapid scientific development; and second, that space exploration, whether via humans or technology, remains very dangerous. It is thus the aim of this article to identify the problems with defining AI in space, and the aspects that would need to be covered, or would require consensus, when the international space law regime is updated vis-à-vis the rights and liabilities of AI.
Definitions of AI
The first concern materialises from the difficulty of defining AI in order to build a regulatory framework. So far, there is no commonly accepted definition of AI in the international legal scheme; scholars merely agree on some of its common characteristics. As such, a starting point for discourse can be found in the UNESCO framework on AI, according to which AI technology is “a machine capable of imitating or even exceeding human cognitive capacities, including sensing, language interaction, reasoning and analysis, problem-solving, and even creativity.”
Notably, the words ‘intelligence’ or ‘learning’ rarely appear in most other definitions of AI. This is because modern AI is not ‘intelligent’; it can merely arrive at reasonable outcomes comparable to those produced by average human intelligence. Although the European Commission attempted to draft a comprehensive definition of AI, even the definition in its ‘AI Act’ runs into a similar set of problems. The Commission defines AI as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” Much like UNESCO’s definition, this too is very broad. Thus, without a consensus on the definition of AI, it becomes difficult to legislate on it effectively.
The Legal Personality of AI
To award AI legal personality, the common approach is to identify the juncture at which the machine has achieved autonomy in thinking. For this, the ‘Turing Test’, or the ‘Imitation Game’, is usually employed; Alan Turing devised it to ascertain when a machine becomes ‘intelligent’. However, until recently, the question of making AI a legal person was not seriously considered, and the programmer was usually held liable for its actions. This approach is now being questioned,[1] and the European Parliament even made recommendations to the European Commission in 2017 to determine the legal status of autonomous machines.
Two approaches now dominate the discourse. First, if AI can perform autonomous actions, its personality can be considered analogous to that of an animal in law. Second, where an autonomous machine empowered with AI can perform actions similar to those of the human mind, going beyond the scope of its written algorithm, such a machine, or ‘electronic person’, can be made a subject of law by bestowing rights and obligations upon it. These approaches can accord liability to AI on Earth, but its scope of liability in space remains uncertain.
One problem persists: AI produces outcomes from data through autonomous processes, even though those processes do not resemble human cognition. As such, AI has not reached a level where it can be equated with the personality of a human, for want of emotion, discretion and consciousness. It still requires human input, which again raises the question: who is liable for the actions of AI? Is it the programmer, the operator or the machine itself?
Liability Regime in Space Law
Here, we approach the second concern, viz., according liability for the acts of AI in space. The damages and liability regime described in Articles VI and VII of the Outer Space Treaty, 1967 (OST) and Articles II and III of the Liability Convention, 1972 has a limited purview. Under this regime, Article VI OST places responsibility on states upon the commission of an internationally wrongful act, and Article VII OST places international responsibility on the launching state, i.e., “a state that launches or procures the launching of an object in outer space, or from whose territory the said object was launched.” The provision outlined in Article VII OST is detailed in Article II of the 1972 Convention. Accordingly, a launching state bears absolute liability for damage caused by its space object on the surface of the Earth or to aircraft in flight, while damage caused elsewhere, in space, attracts fault-based liability under Article III. The threshold criterion, therefore, is the site of the damage: the Earth or space.
Importantly, this liability regime accounts only for states and not non-state actors. Therefore, the development and increased use of AI, together with the more complex activities taking place in space, can create difficult questions of liability for damages.
The Liability Regime and AI
The two primary problems with the liability regime qua AI under the 1972 Convention concern the interpretation of the terms ‘fault’ and ‘persons’ in Article III. It is very difficult to establish ‘fault’ under Article III, not least because the Liability Convention has never been applied in collision cases.[2] More importantly, attributing fault using the ‘due care’ standard becomes increasingly difficult in the case of developing technologies, because the threshold for determining whether an action of the AI was completely autonomous or driven by human input is unclear and hard to ascertain.
Further, the use of ‘persons’ in the Liability Convention implies the involvement of a legal or natural person with rights and obligations, or of states and the types of activities covered under Article VI OST.[3] Since this liability regime envisages either the “fault of the state” or the “fault of persons”, it will be difficult to deem the consequences of the actions of an ‘intelligent’ space object a ‘fault’. This means that even holding the launching state responsible for these actions will be problematic.
Additionally, within this scheme, there are no regulations restricting states’ use of artificial intelligence technologies, and no specific provision can hold states accountable for it. Even the exception to Article II’s absolute liability standard under Article VI of the Liability Convention, which introduces a ‘gross negligence’ standard, does not offer a system to hold AI accountable for damages. Since gross negligence, as a corollary of the ‘standard of care’ regime, requires an assessment of the state of mind, i.e., human mental activity, this standard cannot be applied to machines, even autonomous ones.[4]
Consequently, the international community should catalyse the drafting of national legislation through which states control and monitor the use of AI in space. Such legislation could also form the common ground for a global code of conduct representing a set of ‘best practices’ for the use of AI in space.
Lastly, it is pertinent to observe that not all states have the capacity to undertake space activities or exploration employing AI. Thus, there may be little incentive for such states to amend the liability regime, nor would it be fair to impose a uniform standard of care on states not using AI. Here, the via media would be to develop a ‘sliding scale’ for assessing the liability of states: countries with a lower standard of development qua the use of AI in space would be held to a correspondingly lower legal threshold, and hence a lower standard of care.
Conclusion and the Way Forward
In two path-breaking works, by Bratu and Freeland and by Abashidze et al., the lex ferenda vis-à-vis the space law regime and AI has been envisioned as follows. It would provide a comprehensive definition of AI technology and its role in controlling autonomous machines, and clarify how AI’s use of intelligence affects its treatment as a legal personality. The drafting and ratification of an additional protocol to the Liability Convention are also essential, since such a protocol could redefine the terms ‘space object’, ‘fault’ and ‘gross negligence’ in the context of AI and other advancing technologies used in space.
Given that state and non-state actors are increasingly using AI technologies to spur space activities and exploration, it becomes all the more pertinent that international space law, especially the liability regime, be developed accordingly.
[1] Morhat, P.M. (2017). Artificial Intelligence: Legal View. Moscow: RNGO Institute of State-Confessional Relations and Law.
[2] Hobe, S., Schmidt-Tedd, B., & Schrogl, K.U. (2013). ‘Article III of the Liability Convention’, in Cologne Commentary on Space Law.
[3] Ibid.
[4] Ibid.