Research Article: 2025 Vol: 28 Issue: 1
Abraham Ethan Martupa Sahat Marune, Faculty of Law, Universitas Pelita Harapan, Jakarta, Indonesia
Citation Information: Marune, A.E.M.S. (2025). The urgency of regulating artificial intelligence in networked societies: Imperatives for sustainable governance. Journal of Legal, Ethical and Regulatory Issues, 28(1), 1-6.
The rapid development and massive use of artificial intelligence (AI) technology offers both opportunities and challenges in realizing a digital culture and ensuring security development. This study emphasizes the urgency of establishing comprehensive regulations governing the use of AI to address ethical, social, and security issues. The study is doctrinal, using normative juridical research methods that examine secondary data. The results show that, under the status quo, responsibility for AI's actions can be attributed to the person or legal entity behind it on the basis of the doctrine of vicarious liability. The massive use of AI today should encourage the Indonesian government to promptly regulate the use of AI in a networked society for sustainability.
Keywords: Artificial Intelligence, Legal Responsibility, Networked Society.
Technological developments provide convenience to human life, including through the use of Artificial Intelligence (AI). The use of AI in various computer software helps humans do their jobs automatically. However, the actions "performed" by AI do not always provide positive benefits and impacts. It is not uncommon to find cases of AI "performing" actions that do not accord with its instructions or that even harm other parties. Actions that violate ethics and the rule of law should be accounted for, especially if they cause harm to other parties. Unfortunately, positive law in Indonesia does not recognize AI as a legal subject (Fournier-Tombs, 2021).
The massive use of AI in today's online business sector without further regulation can cause chaos in society, as illustrated by several recent cases of privacy violations and data leaks affecting marketplace users. Inappropriate AI actions can be one of the causes, arising from errors in the programming of the system the AI uses. In addition, inaccurate information or instructions provided by AI can also lead to consumer losses (Schier et al., 2021).
The need for this regulation is also in line with the state's obligation to protect its citizens, as stipulated in the fourth paragraph of the Preamble to the 1945 Constitution of the Republic of Indonesia. Such regulation is also needed to provide legal certainty and a sense of security for the community exposed to AI, in accordance with the guarantees in the body of the 1945 Constitution. These legal arrangements are likewise consistent with legal development as a means to build society, or law as a tool of social engineering. Such norms are expected to direct human activities toward the desired direction of development and renewal (Alegre, 2021).
Based on the above background, the issues to be examined are as follows:
1. What is the status quo of AI's liability as a legal subject under Indonesian positive law?
2. What is the urgency and the ius constituendum of establishing regulations on the utilization of AI in realizing a networked society for sustainability?
This study aims to:
1. To identify and understand AI's liability as a legal subject under Indonesian positive law.
2. To identify and understand the responsibility for AI according to the regulations of countries around the world.
This study is 'doctrinal' in its specification, using normative juridical research methods. This type of legal research analyzes the relationship between law as a set of norms that serve as a reference for behavior and the inventory of positive law (Budianto, 2020). As normative juridical research examining secondary data, the data were obtained through library research. The secondary data consist of primary legal materials, such as the relevant laws and regulations, and secondary legal materials in the form of previous research, papers, and online articles related to the research theme. The data were analyzed using a descriptive-qualitative method, which presents the analysis of the legal materials in orderly and coherent sentences (Negara, 2023).
Status Quo of Legal Liability of Artificial Intelligence (AI)
AI "actions" that violate ethics and the rule of law should be accounted for, especially if they cause harm to other parties. However, Indonesian positive law does not recognize AI as a legal subject. The massive use of AI in today's online business sector without further regulation can cause chaos in society.
The current regulations recognize only persons and legal entities as legal subjects under Indonesian law and do not define AI as a legal subject, so that to date the burden of responsibility recognized in Indonesian law rests only on these two legal subjects (Santo, 2012). However, existing legal doctrine explains that AI's actions can still be accounted for. In this case, the doctrine of vicarious liability can be applied. This doctrine essentially states that one party can be held responsible for the acts or mistakes committed by another person (or another entity). Vicarious liability is a form of secondary or indirect liability that is imposed when the parties have a certain relationship.
At least two elements determine the existence of vicarious liability. First, there is a special relationship between a superior and a subordinate, such that the unlawful act committed by the subordinate must be related to the work. Second, the act must occur within the scope of carrying out that work. This allows a company, as the employer of its employees or subordinates, to remain responsible for errors, omissions, or unlawful acts that cause harm to other people.
Vicarious liability can be used to address acts of AI that cause harm or violate the law. The Civil Code stipulates that an employer or principal is responsible for losses caused by the actions of the persons for whom they are responsible or by the goods under their control. This responsibility also applies to those who represent the business of the employer (or act as proxies), with the exception that parents, guardians, school teachers, and master craftsmen are released from this responsibility if they prove that they could not have prevented the act for which they would otherwise be held accountable.
Even though, according to the law, AI is not a worker that can be classified as a legal subject, AI can still be analogized to an employee because it performs the work ordered by the company. The concept of AI-as-Tools, or AI as a tool, clearly positions the company as the substitute party responsible (Witro et al., 2021). The company acts as the substitute responsible party because AI is not classified as any legal subject, whether a person or a legal entity; consequently, those who can be held accountable for AI's actions are the person or legal entity that provided the input data and knowledge, gave orders to the AI, or on whose behalf the AI acted, regardless of whether such action was planned or foreseen.
Under the Civil Code, the relationship between AI and its operator can be likened to the relationship between a pet and its owner. Even autonomous AI requires data and programs to be input before it can work, so it remains under the supervision of the administrator or system owner. Thus, if AI's actions harm other people, the operator can be held accountable (Puteri et al., 2020).
The doctrine of vicarious liability can also be applied in the realm of criminal law. Initially, this doctrine applied only in the civil sphere, especially in the law of compensation (tort law) for acts that were unlawful or caused damage, although its application in this field still draws differing opinions among experts (Ravizki & Yudhantaka, 2022).
This is because vicarious liability is not accommodated in the current provisions of positive criminal law, either expressly or impliedly, nor in practice. The application of this doctrine also remains debatable on the ground that vicarious liability is contrary to the principle of Actus Non Facit Reum Nisi Mens Sit Rea, that an act does not make a person guilty unless the mind is also guilty. The fault in question refers to a psychological (inner) state and a certain relationship between that inner state and the act committed. Another reason is that it is contrary to the criminal principle of "Geen Straf Zonder Schuld", which means that there is no punishment without fault, where fault includes the elements of intent and negligence (Harasimiuk & Braun, 2021).
In the event of a criminal act by AI, the element of actus reus (the act) has basically been fulfilled. However, the mens rea element (the guilty mind) is difficult to establish for AI, because AI has no consciousness or inner state with which to judge the good and bad of something as humans do. That inner state cannot be known, because AI is not a person (human), even though it has human-like abilities. Nevertheless, technically an AI system has the ability to analyze and make appropriate decisions once data has been input, which may indicate the presence of a mens rea element in a criminal act committed by AI (Fordyce, 2021).
In addition, the use of AI under the AI-as-Tools concept by companies can be the basis for applying this principle. AI, which is used as a tool or means for companies to carry out their activities, provides a basis for the vicarious liability of companies for any AI actions that violate criminal law provisions. Moreover, AI is not identified as a legal subject recognized by criminal law, namely persons and legal entities, and this too becomes a basis for the company's vicarious liability.
The absence of regulation on AI's responsibilities and its status as a legal subject from a criminal law perspective should be considered in progressive legal development in the future, given the massive development of technology and information that allows everything to be done through AI. The urgency of this regulation is in line with the "Legal Theory of Development" put forward by Mochtar Kusumaatmadja, which states that the function of law is to act as a driving force for development: law can bring society in a more advanced direction (Aulianisa & Indirwan, 2020). Development here means development in the broadest sense, covering all aspects of people's lives, not only economic life. The essence of such development is change, so law cannot be understood as a static element that always lags behind change; rather, law must be at the forefront guiding change. Law is not merely a follower but must play a role as the prime mover of development.
Regulators and policy makers must adapt laws to the technological advances brought about by the Industrial Revolution 4.0, so that these conditions can fulfill a sense of justice and guarantee legal certainty in society. Without legal arrangements that adapt to the times, advances in technology and information can create massive disruptions to human life.
The Urgency and Ius Constituendum of Artificial Intelligence (AI) Regulation
The use of AI has great potential in realizing digital culture and strengthening security development. However, the rapid development of AI also poses challenges and risks that need to be addressed through the establishment of proper regulations (Prianto et al., 2020). Regulation is needed to address the various ethical, social, and security issues that arise with the development of AI technology.
In the context of networked society and sustainability, clear and directed regulations will provide guidance for industry players and AI users in ensuring the responsible use of this technology. These regulations may include guidelines regarding data use, privacy protection, and avoidance of bias in AI algorithms. With good regulations, AI developers and users will have clear guidelines in ensuring the use of this technology complies with applicable ethical values and norms (Lukitasari et al., 2022).
Regulating AI will provide a solid foundation for innovation and investment in AI. Countries that have clear and supportive regulations will be more attractive to companies and investors to operate and invest in (Darmawan, 2022). Good AI governance at the national level is also important in the context of international cooperation. Countries can share best practices, standards, and frameworks for governing the use of AI. Through collaboration, countries can develop uniform or mutually recognized regulations to facilitate data exchange, AI system interoperability, and mutual protection against cybersecurity threats.
AI also has strategic implications for the national security of a country. In the context of using AI in the defense and intelligence sectors, effective AI governance will help ensure that AI technologies are not misused and do not fall into the wrong hands. Regulations that take into account security aspects will help protect the country from cyberattacks, foreign surveillance, and threats to critical infrastructure. Countries need to have appropriate AI arrangements in place to protect national interests and ensure technological independence. Good regulation can help manage technology transfer, protect data, and address the risk of unwanted dependence on foreign countries or companies. AI regulation must also take into account aspects of universal ethics and values. In a global context, it is important to promote the use of AI that respects human rights, prevents discrimination, and advances social justice (Johnson, 2019).
In establishing comprehensive and effective regulations related to the use of AI, there are several important aspects that need attention:
First, regulations must be able to accommodate the rapid development of AI technology. AI technology continues to evolve rapidly, so regulations must be designed with flexibility to keep up with the latest developments. Regulations that are too rigid or out of date can become barriers to innovation and the development of useful AI. Therefore, regulations need to provide space for adaptation and adjustment to the rapid development of AI technology.
Second, regulations must prioritize ethical values in the use of AI. The application of AI can have a significant impact on society and individuals. Therefore, regulations must pay attention to ethical values which include principles such as fairness, transparency, accountability, privacy, and diversity. Ethical regulations will help prevent abuse of AI, reduce algorithm bias, and protect individual rights. Ethical thinking in regulation is also important to build public trust in AI technology.
Third, the active participation of various stakeholders is key in establishing effective regulations. The formation of regulations cannot be carried out in isolation by the government or certain institutions, but must involve industry players, academics, civil society organizations and the general public at large. Involving various parties will help gain diverse perspectives and strengthen the legitimacy of the resulting regulations. In addition, active participation can also facilitate the collection of relevant information and input to inform better policies and regulations.
In formulating norms on AI in Indonesia, examples of good regulation from other countries can be considered. Some examples of AI norm-setting that can serve as references include (Cath et al., 2018):
1. European Union (EU): The European Union has implemented the General Data Protection Regulation (GDPR) which protects personal data of individuals and provides guidance regarding the use of data in the context of AI. This arrangement emphasizes privacy protection and fairness in processing AI data, as well as providing individual rights regarding their personal data.
2. United States (US): In the US, the National Institute of Standards and Technology (NIST) has released a framework called "Ethical Principles and Practices for AI". This framework highlights ethical principles that need to be considered in the development and use of AI, such as fairness, diversity, accountability, and transparency.
3. Canada: The Government of Canada has issued a "Directive on Automated Decision-Making" governing the use of automated decisions in public services. This arrangement emphasizes the importance of transparency, explanation, and accountability in AI systems that affect the lives of individuals.
4. Singapore: Singapore has adopted a “Model AI Governance Framework” which highlights the principles of responsible use of AI, such as fairness, transparency, accountability and sustainability. This framework provides guidance to organizations and governments to manage AI risks effectively.
5. Germany: Germany has issued the "Ethics Guidelines for Trustworthy AI" which sets out ethical principles in the use of trustworthy AI. This guide covers values such as privacy protection, fairness, sustainability, and transparency in the use of AI.
The norm-setting of these countries can serve as a reference for the development of AI regulation in Indonesia. However, it is also important to consider Indonesia's unique context and needs, and to involve the participation of various stakeholders, to ensure that AI norm-setting accords with local conditions and values.
Artificial Intelligence (AI), besides having extraordinary potential to do good, can also do harm, especially in ways that cannot be anticipated. The legal acts "performed" by AI should be accounted for. The non-recognition of AI as a legal subject under Indonesian positive law therefore raises new problems. The doctrine of vicarious liability, under which a party can be held liable for acts or mistakes it did not itself commit, forms the basis for accountability for AI, so that responsibility for AI's legal actions lies with the operator or the party that employs the AI.
The establishment of appropriate regulations regarding the use of AI in realizing a networked society and sustainability is an urgent need. Comprehensive and balanced regulation will provide clear guidelines for the ethical use of AI, protect individual rights, and encourage responsible innovation. With good regulations, it is hoped that the development of AI can make a positive contribution to digital culture and the development of security in an increasingly digitally connected era.
Alegre, S. (2021). Regulating around freedom in the “forum internum”. In Era Forum (Vol. 21, No. 4, pp. 591-604). Berlin/Heidelberg: Springer Berlin Heidelberg.
Aulianisa, S. S., & Indirwan, I. (2020). Critical Review of the Urgency of Strengthening the Implementation of Cyber Security and Resilience in Indonesia. Lex Scientia Law Review, 4(1), 31-45.
Budianto, A. (2020). Legal research methodology reposition in research on social science. International Journal of Criminology and Sociology, 9, 1339-1346.
Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2018). Artificial intelligence and the 'good society': The US, EU, and UK approach. Science and Engineering Ethics, 24, 505-528.
Darmawan, Y. (2022). Development Legal Theory and Progressive Legal Theory: A Review, in Indonesia’s Contemporary Legal Reform. Peradaban Journal of Law and Society, 1(1).
Fordyce, A. J. (2021). Toward Ethical Applications of Artificial Intelligence: Understanding Current Uses of Facial Recognition Technology and Advancing Bias Mitigation Strategies. Georgetown University.
Fournier-Tombs, E. (2021). Towards a United Nations internal regulation for artificial intelligence. Big Data & Society, 8(2), 20539517211039493.
Harasimiuk, D., & Braun, T. (2021). Regulating artificial intelligence: binary ethics and the law. Routledge.
Johnson, J. (2019). Artificial intelligence & future warfare: implications for international security. Defense & Security Analysis, 35(2), 147-169.
Lukitasari, D., Hartiwiningsih, H., & Wiwoho, J. (2022). Strengthening the Use of Artificial Intelligence Through Sustainable Economic Law Development in the Digital Era. In International Conference for Democracy and National Resilience 2022 (ICDNR 2022) (pp. 218-223).
Negara, T. A. S. (2023). Normative legal research in Indonesia: Its origins and approaches. Audito Comparative Law Journal (ACLJ), 4(1), 1-9.
Prianto, Y., Sasmita, P. Y., & Mourin, A. A. (2020). The Urgency of Government Regulations in Substitute of Law About the Drone Usage Regulation. In The 2nd Tarumanagara International Conference on the Applications of Social Sciences and Humanities (TICASH 2020) (pp. 15-18).
Puteri, R. P., Junaidi, M., & Arifin, Z. (2020). Reorientasi sanksi pidana dalam pertanggungjawaban Korporasi di Indonesia. Jurnal USM Law Review, 3(1), 98-111.
Ravizki, E. N., & Yudhantaka, L. (2022). Artificial Intelligence Sebagai Subjek Hukum: Tinjauan Konseptual dan Tantangan Pengaturan di Indonesia. Notaire, 5(3).
Santo, P. A. F. D. (2012). Tinjauan Tentang Subjek Hukum Korporasi dan Formulasi Pertanggungjawaban Dalam Tindak Pidana. Humaniora, 3(2), 422-437.
Schier, A. D. C. R., Maksym, C. B. R., & Mota, V. D. (2021). The urgency of regulating and promoting artificial intelligence in the light of the precautionary principle and sustainable development: A urgência da regulação e do fomento da inteligência artificial à luz do princípio da precaução e do desenvolvimento sustentável. International Journal of Digital Law, 2(3), 133-152.
Witro, D., Rasidin, M., & Nurjaman, M. I. (2021). Subjek Hukum dan Objek Hukum: Sebuah Tinjauan Hukum Islam, Pidana dan Perdata. Asy Syar'iyyah: Jurnal Ilmu Syari'ah Dan Perbankan Islam, 6(1), 43-64.
Received: 01-Nov-2024 Manuscript No. JLERI-24-15478; Editor assigned: 02-Nov-2024 Pre QC No. JLERI-24-15478(PQ); Reviewed: 16-Nov-2024 QC No. JLERI-24-15478; Revised: 21-Nov-2024 Manuscript No. JLERI-24-15478(R); Published: 28-Nov-2024