|VOLUME 1|ISSUE 2|APRIL 2018|ISSN: 2581-3595|



“Beware, for I am fearless and therefore powerful.” – Frankenstein, Mary Shelley.


God created mankind, and mankind created machines. Historically, the fast-paced development of life in all its aspects has been attributed to the contrivance of machines and computers, which enabled us to overreach our mental and physical incapacities and undoubtedly made the luxury of comfort publicly accessible. However, there are no actions without consequences: machines and computers have brought rising unemployment, global warming, virtual existence as opposed to real existence, morally disputed future generations, eroded innocence among children exposed to unvetted information of all kinds online, and cyber-crime; the list is by no means exhaustive. We learn of new horrors and trends of technology daily, yet the balance of convenience still seems to favor it. The disarming nature of technology and the rampantly rising cases of its abuse call for urgent and stringent legal protection of the fundamental rights of its users in cyberspace. Meanwhile, the information technology industry has begun to invest actively in the creation of artificial intelligence (AI) systems on an unprecedented scale. The most pressing issue is the obscure manner in which these systems work, which in extreme circumstances could lead to a denial of legal and human rights. In this paper, I focus on the accountability of these systems as an essential ingredient in safeguarding our future against this novel legal problem.


In Mary Shelley's Frankenstein, Dr. Frankenstein created a monster that he could not control. A frightening analogy can be drawn to the artificial intelligence systems now being developed by humans. There is a dire need for crisis management in this sphere: we should be prepared to avert the disaster before it occurs, to defuse the bomb, as it were, before it is set off. This requires an intensive understanding of how these systems function, the dangers they pose, and the remedies available. The judgment in Google v. Oracle,[2] delivered by Judge William Alsup, serves as a hallmark for understanding the relationship between computing technology and law. Judge Alsup spent several weeks learning the programming language Java in order to understand the effort and principles Oracle applied in designing it, and to appreciate the technology as it actually functions in the real world.[3] The internal workings of computing technology tend to be obfuscated, so that only user interaction and computer output are visible to the end user.[4] This may result in conflicts between the developer's autonomy and the rights possessed by users.

For a long time now, the surveillance model of transacting business over the internet has been highly lucrative, as it allows corporations to sell information about their users to maximize advertising revenue. AI systems look to build upon this existing infrastructure and expand the insights corporations can derive from such information.[5] With the development of hardware technology slowing down,[6] companies are focusing more on software that maximizes the utility of existing hardware and automates tasks that would otherwise require human interaction.[7] These corporations often focus on creating safeguards against the misuse of AI, but in the long run this has the potential to become a question of governmental policy. That policy must evolve substantially to accommodate emerging trends in information technology, including the enactment of statutes providing legal protection from AI.


In 1956, John McCarthy proposed the term 'Artificial Intelligence' while drafting a research proposal on computing and intelligence.[8] McCarthy defines AI as “[…] the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”[9] This definition focuses on how computing systems perform 'intelligent functions', rather than merely imitating human intelligence. He identified an intelligence model that leverages the strengths of computing systems, thereby performing tasks over which humans have no mastery.

The empirical view of AI treats such systems as intelligent agents whose flexibility and intelligence derive primarily from their ability to perceive their environment and react to changes according to their design, to proactively achieve tasks in a goal-driven manner, and in some cases to interact with other agents, including humans.[10] Intelligent agents need not exist in the physical world and can function entirely within a computational environment, in which case they are referred to as 'infobots'.[11]
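The perceive-and-act cycle that defines an intelligent agent in this view can be sketched in a few lines of code. The following is purely an illustrative toy (the thermostat scenario, class name and methods are this author's invention, not drawn from any cited system): the agent senses its environment, compares the perception against a goal, and selects an action accordingly.

```python
class ThermostatAgent:
    """A minimal illustrative intelligent agent: it perceives its
    environment (a temperature reading) and acts in pursuit of a goal
    (keeping the room near a target temperature)."""

    def __init__(self, target=21.0):
        self.target = target  # the agent's goal state

    def perceive(self, environment):
        # Sense the current state of the environment.
        return environment["temperature"]

    def act(self, temperature):
        # Choose an action that moves the environment toward the goal.
        if temperature < self.target - 1:
            return "heat"
        if temperature > self.target + 1:
            return "cool"
        return "idle"


agent = ThermostatAgent(target=21.0)
print(agent.act(agent.perceive({"temperature": 18.0})))  # heat
print(agent.act(agent.perceive({"temperature": 21.5})))  # idle
```

An 'infobot' in the sense described above would run the same loop, but its 'environment' would itself be computational, such as a stream of network data, rather than a physical room.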

Most AI systems that are designed today, including speech recognition systems and natural language processors, collect large amounts of user data and repeatedly use that information to improve the results of their processing.[12] Intelligent agents need to go a step further and necessarily have knowledge of previous experiences and the ability to create goals, value certain outcomes and be mindful of their environment.


AI systems do not always behave predictably. Even though developers strive to ensure that a system will function exactly as intended, in practice they are unable to monitor all aspects of its operation. A common concern is that such systems may go 'rogue' and cause damage.[13] A recent example is 'Tay', a chatbot designed by Microsoft that was programmed to learn from the responses of the people interacting with it and to create an engaging environment for dialogue. The responses it received, however, ultimately turned the system into a racist, Nazi-sympathizing persona.[14] This was plainly not the result Microsoft intended, and the company quickly rolled back the system while issuing an official apology. The episode highlights the vulnerability of AI systems: in this case the system could be withdrawn before much damage was done, but in other cases that may not be possible. Suitable precautions must therefore be taken.
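The failure mode that Tay illustrates, a system that uncritically learns from whatever users feed it, can be sketched with a toy model. The code below is this author's illustration only and bears no relation to Microsoft's actual design: a bot that stores every user message verbatim and replays stored messages as replies will, under coordinated abuse, end up with a memory dominated by abusive content.

```python
import random


class NaiveChatbot:
    """Toy chatbot that 'learns' by storing every user message and
    replaying stored messages as replies. With no content filter,
    whatever users feed it eventually dominates its output."""

    def __init__(self):
        self.memory = ["Hello!"]

    def learn(self, message):
        # Unvetted learning: every input is accepted verbatim.
        self.memory.append(message)

    def reply(self):
        # A reply is just a stored message chosen at random, so the
        # odds of an abusive reply track the share of abusive inputs.
        return random.choice(self.memory)


bot = NaiveChatbot()
for msg in ["you are great", "offensive slogan", "offensive slogan"]:
    bot.learn(msg)

# After coordinated abuse, half the bot's memory is abusive content.
abusive_share = sum(m == "offensive slogan" for m in bot.memory) / len(bot.memory)
print(abusive_share)  # 0.5
```

The legal point follows directly: the 'damage' here is a function of third-party inputs the developer never reviewed, which is precisely why accountability for such systems is hard to locate.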

'Siri', the voice-controlled assistant designed by Apple, can access and manage certain capabilities of the device on which it is installed, but only within the limits of its programming. For instance, the system can dial emergency services when called upon; at the same time, it allows the user to assign it an alternative name other than Siri. It was found that these two features could interact dangerously: renaming the assistant 'Ambulance' meant that a request to call an ambulance merely summoned the assistant, effectively overriding the emergency capabilities of the system.[15]

As AI systems come into wider use, authors often speak of contractual relationships between AI systems and users, language that ordinarily describes a relationship between two entities having legal personality. In law, however, neither AI systems nor computing systems generally have any legal personality, so such systems have no legal standing to create contractual relationships.[16] For example, under the Indian Contract Act, 1872, any person not explicitly barred from contracting may do so, whether a natural or a legal person. No such recognition has been extended to AI systems or any other computing system. By inference, therefore, AI systems cannot technically enter into legally binding relationships.

The internal workings of these systems are often described as a 'black box', meaning that even the developers sometimes fail to understand why a system did what it did. This shifts the problem onto the user, who is concerned only with the output, and it is extremely dangerous not to know how a system is actually working. In a recent incident, chatbots created by Facebook began conversing in a language of their own that the developers could not understand, and the experiment was soon shut down. Although this might appear to be a great leap for AI, several experts, including Professor Stephen Hawking, have raised fears that humans, limited by slow biological evolution, could be superseded by AI. Others, such as Tesla's Elon Musk, philanthropist Bill Gates and Apple co-founder Steve Wozniak, have likewise expressed concern about where AI technology is heading.


The creation of AI systems will also affect taxation structures, as AI would take over the labor market by providing highly competitive services.[17] Especially in countries like India, with a huge labor force, the replacement of human labor with AI systems could be a blow to the taxation revenue of the state.[18] Additionally, the advent of self-driving cars would allow users to avoid parking fees. In India, the Government has issued rules taxing digital goods and services hosted outside the country.[19] If left unchecked, AI systems could adversely impact the taxation revenue of states. They would also lead to a rise in unemployment as labor is rendered redundant without alternative jobs. Policymakers need to prepare for this in advance.

Technology could also be used to suppress the vote of certain sections of society by targeting them with 'robo-calls'.[20] Social media networking sites such as Facebook could be misused through rigged questionnaires designed to tilt the masses in one direction or another. Moreover, the AI systems used by Facebook and Google often cannot differentiate between fake and real news articles, and thereby promote unverified ones. These harmful effects were recently observed in the controversial 2016 Presidential Election in the USA.

The privacy of users is also threatened by AI systems. Many AI systems today process user data to understand an individual's usage patterns. This information is often sold to third parties, which helps fund companies that follow the surveillance model of business.[21] The problem is compounded when third parties enter sensitive data about other individuals into the databases of AI systems for the purpose of wider comparison. For example, the Russian application 'FindFace' was criticized for allowing users to upload images in order to determine the identity of the individual pictured. Similarly, Truecaller requires a user to log in using their phone or email services, then rummages through their contacts and catalogues the contact information in its database so that users can in future identify unknown numbers that call them.[22] This is a clear infringement of the right to privacy, but cases often founder on the question of accountability, or the lack thereof.

India has no official program for tackling cyber security vulnerabilities comparable to the programs run by the Defense Advanced Research Projects Agency ('DARPA') in the USA. The use of AI systems could put the security information of thousands of users at risk: AI systems are predicted to gain the ability to crack encryption using algorithms generated through analysis and interpretation, as opposed to brute-force attacks, which on most computer systems would take impossibly long.[23] It is also said that AI systems will be able to mimic the credentials of new users. India lacks the necessary infrastructure, yet has made Aadhaar cards mandatory for all, a measure rightly put on hold by the Hon'ble Supreme Court. As an illustration of the inadequacy of our cyber security policy, France reportedly hacked into our central system within minutes.


This is uncharted territory, as there is no legal precedent to rely on. The concerns stated above call for transparency and accountability in AI systems in order to safeguard the rights of their users. Having established that there is no doubt about the need for accountability, a further question arises: in what capacity should these systems be made accountable or liable, given that they have no legal personality? AI systems could either be granted their own legal identity, or they could serve as agents of the company or individual who creates or utilizes them.

The interaction between computing systems and humans involves so many real-world facilities that the question of legal capacity cannot be ignored.[24] Some scholars try to understand such systems by analogy to corporations, which have been granted legal personhood by a fiction created by law. Their main argument is that personhood would provide a new means of communal or economic interaction, while shielding individual users from liability.[25] They argue that such entities have an actual place of residency, cyberspace, and that like corporations they may come into existence or cease to exist.[26] The counter-argument is that corporations are legal persons by virtue of statutory enactment; it is very rare for an entity to enjoy legal personality without a law conferring it being in place before the entity comes into existence.[27]

Other scholars have argued that, from a contractual point of view, legal capacity becomes necessary, and that an AI system's legal capacity can arise from the law of agency: the AI system could act as an implied agent of the company or corporation.[28] This view holds merit, in my opinion, as any action taken by the AI system would be in pursuance of its design, which itself reflects the actions intended by its principal, the company. This would help in construing the consequent liabilities.

Developers and testers should take a stakeholder-centric view of AI systems and focus on the safety of the ultimate users. Testing should also be done by a third party, since the developer or any person directly associated with building the system is more likely to overlook flaws in their own critical analysis. As to jurisdiction, the court within whose territorial limits the injured user resides should have jurisdiction over the matter, regardless of where the data is stored or where the cause of action arose, since the ultimate aim is to protect and safeguard the rights of the user. India should take a cue from Australia in this regard. The recommended Unified Privacy Principle 11 roughly provides that any corporation handling the information of an Australian citizen outside Australia's borders remains responsible for that information as though it were being handled in accordance with Australian law.[29] Corporations generally maintain that the country where the data is stored should have jurisdiction, a rule that has been seriously misused in the past. For instance, during the National Security Agency ('NSA') leaks, the US authorities failed to secure data retrieved by them from corporations like Google and Facebook and stored on US servers. The leaked data included data belonging to Indian users; but since the corporations were incorporated in the US and acting within the legal limits of that country, they could not be charged by the Indian authorities.[30] Such lacunae in the law must be filled in order to avoid repeating the same mistakes.


The rapid proliferation of AI systems into all spheres of society is highly likely in the near future, hopefully after its threatening issues are resolved and the risk factor reduced to a minimum. In all probability, it will be rampant. We can, however, think ahead and brace ourselves by strengthening and bullet-proofing our governmental policies and statutory provisions. Additionally, new laws must be enacted with the changing demands of society in mind. Presently, India lacks the infrastructure needed to protect its citizens from potential cyber-crimes and cyber-frauds, yet it is racing to import technology from the West, as evident in the recent push toward mandatory Aadhaar biometric verification and linking to bank accounts in the garb of welfare policy. We need to learn to walk before we can run; realistically speaking, we are barely crawling.

The Judiciary has a major role to play as the savior of the populace; it must therefore create and develop awareness when it comes to technology. Used intelligently, technology could even help lessen the docket someday. Safeguards must be in place, as AI systems can potentially prove detrimental to the Fundamental Rights (chiefly Articles 14, 20 and 21 of the Indian Constitution) and to the legal and human rights of citizens. Cyber cells should be easily accessible to one and all. The Government must fund the country's research and development of cyber security policies and laws on an equal footing with national security laws, especially with cyber terrorism on the rise.

AI systems must be made accountable: if not as individual legal personalities, then as agents, thereby making their principal, whether a developer or a company, strictly liable should the systems go 'rogue'. Statutes must mandate third-party testing of these systems, commissioned by their company or developers. Courts must be well equipped to conduct electronic discovery in cases whose outcome largely depends on it. Alternative jobs must be created for those likely to be adversely affected by the advent of AI systems. Laws must be amended to ensure the protection of citizens regardless of where their data is stored. AI systems are on the rise because they avoid human error and can adapt quickly to changing circumstances; the intent is noble, but the road to hell is paved with good intentions. The challenge is to strike the right balance between Frankenstein's monster and Vicki, the kind, helpful human-like robot of the American sitcom Small Wonder.


[2] Google v. Oracle, No. 13-1021 (Fed Cir 2014).

[3] Dan Farber, Judge William Alsup: Master of the Court and Java, CNET, May 31, 2012.

[4] Frankenstein’s paperclips, The Economist, July 1, 2016 (‘Black Box’), 14-15.

[5] Bruce Schneier, Surveillance as a Business Model, November 25, 2013.

[6] Tom Simonite, Moore’s Law is Dead. Now What?, MIT TECHNOLOGY REVIEW, May 13, 2016.

[7] Nathan Benaich, Investing in Artificial Intelligence, TechCrunch, December 25, 2015.

[8] Negnevitsky, ARTIFICIAL INTELLIGENCE: A guide to intelligent systems 2 (2002), 6-7.

[9] John McCarthy, What is Artificial Intelligence?, November 12, 2007.

[10] Don Gilbert, Intelligent Agents: The Right Information at the Right Time, IBM Intelligent Agent White Paper, 1-9 (1997).

[11] David Poole & Alan K. Mackworth, ARTIFICIAL INTELLIGENCE: FOUNDATIONS OF COMPUTATIONAL AGENTS 11 (2010).

[12] Robert McMillan, Apple Finally Reveals How Long Siri Keeps Your Data, Wired, April 4, 2013.

[13] Stanford University, Artificial Intelligence and Life in 2030, 12 (2016) (‘AI Report’), 1-3.

[14] Peter Lee, Learning From Tay’s Introduction, March 25, 2016.

[15] Will Knight, Tougher Turing Test Exposes Chatbots’ Stupidity, MIT Technology Review, July 14, 2016.

[16] Samir Chopra & Laurence White, Artificial Agents – Personhood in Law and Philosophy, PROCEEDINGS OF THE 16th EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE 1-3 (2004).

[17] AI Report, supra note 12, 45-47.

[18] Tax Revenue, 2015-2016.

[19] Service Tax (Fourth Amendment) Rules, 2016.

[20] Supra note 12, 45-47.

[21] AI-One, The current state of cyber security is fundamentally flawed, October 18, 2011.

[22] Keith Andere, Truecaller! How it works and what you need to know about it, June 13, 2015.

[23] Sebastian Anthony, Researchers Crack The World’s Toughest Encryption By Listening To The Tiny Sounds Made By Your Computer’s CPU, EXTREME TECH, December 18, 2013.

[24] Curtis E.A. Karnow, The Encrypted Self: Fleshing Out the Rights of Electronic Personalities (1994), 3-4.

[25] Id.

[26] Id.

[27] Supra note 15, 363-367.

[28] Id.

[29] Dan Jerker B. Svantesson, Privacy, Internet and Transborder Data Flows: An Australian Perspective, 4 MASARYK U.J.L & TECH. 2, 7 (2010).

[30] Centre for Internet and Society, Internet Privacy in India.