Journal of Legal, Ethical and Regulatory Issues (Print ISSN: 1544-0036; Online ISSN: 1544-0044)

Research Article: 2023 Vol: 26 Issue: 3

Aspects of Artificial Intelligence on E-Justice and Personal Data Limitations

Fotios Spyropoulos, University of West Attica

Evangelia Androulaki, University of West Attica

Citation Information: Spyropoulos, F., & Androulaki, E. (2023). Aspects of artificial intelligence on e-justice and personal data limitations. Journal of Legal, Ethical and Regulatory Issues, 26(3), 1-8.

Abstract

The article deals with the evolving applications of artificial intelligence in judicial systems, starting from the basic premise of data availability. The importance of open data policies for the development of predictive models of justice is highlighted, and methods to protect the privacy of the parties are examined, along with their advantages in improving the administration of justice and their technical shortcomings. The protection of sensitive information from specific risks, and methods for its concealment, are addressed with reference to European statutes. An analysis follows of certain AI applications in justice (predictive justice, electronic dispute resolution), together with an overview of the methods on which the relevant software is based. The issue of overstepping the limits of technology when solving substantive legal questions is also raised. The range of applications of AI and related tools in criminal justice is a chapter in itself, discussing their potential role in adjusting sentencing and their practical utility in the prosecution of crime. Particular emphasis is placed on how the use of algorithmic variables can affect a subject of criminal proceedings, often in a discriminatory manner. Finally, the importance of data processing by artificial intelligence is underlined, and the rights retained by data subjects over their personal information are emphasized.

Keywords

Artificial Intelligence, E-Justice, Personal Data, Technoethics.

Introduction

The use of artificial intelligence in judicial systems is a highly topical subject which raises modern concerns at the level of technoethics, and this is why it has become the subject of research. In particular, the use of artificial intelligence in justice was examined in an online survey conducted in April 2018 among representatives of the member states participating in the European Commission for the Efficiency of Justice (CEPEJ) and civil society (it should be noted, as a limitation of the survey, that the response rate was relatively low and did not permit clear tendencies to be ascertained). Applications were categorized on the basis of the services provided. Indicatively, the basic categories of application of artificial intelligence are the following: case-law search engines, online dispute resolution, assistance in preparing draft decisions, analysis (predictive, scales), categorization of contracts according to different criteria and detection of divergent or incompatible clauses, and "chatbots" (conversational applications) for informing litigant parties or supporting them in court proceedings.

Implementing AI in Justice Administration

Requisites for the development of artificial intelligence in the administration of justice: The first substantial condition for the development of artificial intelligence in the field of justice is the availability of data. The more data are available, the better artificial intelligence can refine the models that improve its predictive ability. Consequently, a policy of open data for court decisions constitutes a prerequisite for the work of legal technology companies which specialize in search engines or trend analysis (predictive justice) (European Commission for the Efficiency of Justice, 2018).

As a concept, open data refers to the dissemination of "unrefined" data in structured computer databases. These data, aggregated in whole or in part with other structured sources, form what we call big data.

The question of anonymization or pseudonymization of these data within the European regulatory context of data protection has been of great concern to the member states of the Council of Europe. A survey has shown that twenty-three countries have declared that they would proceed to pseudonymization in at least some categories of litigation (e.g., personal status, marital status) by deleting data that would render the litigant parties or witnesses identifiable (full names, addresses, telephone numbers, identification numbers, bank account numbers, tax identification numbers, health status, etc.) (Directive 2013/37/EU & Directive 2003/98/EC).

The advantages of such publication are several: better knowledge of the judicial work and of trends in case law, improvement in the quality of a judicial system aware that it is being observed, and the creation of an entirely new base of factual data (European Commission for the Efficiency of Justice, 2018). There are, nonetheless, several concerns about this publication, mainly at the technical level. For instance, the selection of court decisions eligible for publication is not necessarily well organized among the courts of all instances: certain applications used by European courts were not designed for this purpose, especially in reference to first-instance decisions, and some countries will have to establish new procedures for collecting decisions if the collection is meant to be exhaustive. Equally, no fully efficient automated mechanism has yet been designed that would prevent any risk of identifying or re-identifying the data subjects.

In the digital era, in order to achieve a balance between the need to publicize court decisions and respect for the fundamental rights of litigants or witnesses, their names and addresses must not be disclosed in the decisions that are published, taking into particular consideration the risk of abuse and re-use of the said personal information, as well as the particularly sensitive nature of the data the decisions may contain (CELEX: 51998DC0585; European Commission, 1999). Automated processing can be systematically employed to conceal such information. The sensitive character of certain personal data requires particular attention, as provided in Article 6 of Convention 108. This applies to data which reveal national or racial origin, political convictions, participation in trade union organizations, religious or other convictions, bodily or mental health, or sex life, all of which are considered sensitive information. Court decisions may contain several other personal data which also fall under the category of sensitive personal data. Courts dealing with criminal cases are particularly concerned with another category of sensitive data: data regarding criminal proceedings and criminal convictions. All these sensitive data must be treated with proper attention. Their mass dissemination would pose serious risks of discriminatory treatment, "profiling" and violation of human dignity (GDPR, Recital 71).
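
To illustrate how such automated concealment might operate in its simplest form, the following is a minimal sketch in Python. The identifier patterns, the party-name list and the pseudonymize helper are hypothetical simplifications introduced here for illustration; they do not describe the mechanism of any actual court system, which, as noted above, does not yet exist in a fully reliable form.

```python
import re

# Hypothetical, simplified pseudonymization of a court decision text.
# Real systems must handle far more identifier types and languages.

# Patterns for directly identifying data of the kinds listed above:
# telephone numbers, bank account numbers (IBAN-like), e-mail addresses.
PATTERNS = {
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def pseudonymize(text: str, party_names: list[str]) -> str:
    """Replace known party names and matched identifiers with placeholders."""
    # Full names of litigants/witnesses would be supplied by the court registry.
    for i, name in enumerate(party_names, start=1):
        text = text.replace(name, f"[PARTY {i}]")
    # Replace pattern-matched identifiers.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

decision = ("The claimant Maria Papadopoulou (tel. +30 210 1234567, "
            "maria@example.com) seeks damages from Georgios Ioannou.")
print(pseudonymize(decision, ["Maria Papadopoulou", "Georgios Ioannou"]))
```

Even this toy example shows why purely pattern-based approaches fall short: any name or identifier not known in advance or not matching a pattern survives, which is precisely the re-identification risk discussed above.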

Artificial Intelligence and Systems of Predictive Justice

Another practical application of artificial intelligence in the field of justice is the systems of predictive justice, which are designed to be used by legal services, by insurers (both for their internal needs and for their insured clients), and by lawyers, so that they may predict the outcome of a judicial dispute. In the form of a graphic representation, they present the probability of success with respect to the outcome of a dispute, based on criteria entered by the user (specific to each kind of dispute). The reasoning in the functioning of predictive-justice software is fundamentally based on methods which are either generative, usually called Bayesian, or discriminative, and which attempt to estimate the current or future range of values of a variable (e.g., the outcome of a trial) from the analysis of previous examples (Aletras et al., 2016). At the present stage of evolution of machine learning techniques, the derivation of reliable results as to the "prediction" of court decisions is not possible. At any rate, the application of these techniques in civil, commercial and administrative disputes may be considered for the creation of scales or for online extrajudicial dispute resolution, provided there remains a subsequent possibility of appealing to a judge. Apparently, the basic question arising from such use of artificial intelligence is not so much whether it is beneficial or harmful, desirable or not, but whether the proposed algorithms can achieve the kind of result sought. Irrespective of the quality of the software submitted to testing, the prediction of the judge's decision in civil, commercial and administrative disputes could be a desirable benefit, although sometimes for very different reasons, both for those responsible for public justice policies and for those who exercise legal professions in the private sector (Jean, 2016).
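
To make the distinction between the two families of methods concrete, the following is a minimal sketch assuming a toy dataset of numerically encoded case features; the features, data and probability estimates are invented for illustration and bear no relation to any deployed predictive-justice product.

```python
# Toy illustration of the two families of methods mentioned above:
# a generative (Bayesian) model and a discriminative model, both
# estimating the probability of a claimant winning from past cases.
import numpy as np
from sklearn.naive_bayes import GaussianNB            # generative
from sklearn.linear_model import LogisticRegression   # discriminative

# Columns: claim amount (k EUR), years of dispute, prior similar wins
X = np.array([[10, 1, 3], [200, 4, 0], [15, 2, 2], [300, 5, 1],
              [12, 1, 4], [250, 3, 0], [18, 2, 3], [280, 6, 1]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = claimant won

new_case = np.array([[20, 2, 2]])

generative = GaussianNB().fit(X, y)
discriminative = LogisticRegression().fit(X, y)

print("Generative P(win):     %.2f" % generative.predict_proba(new_case)[0, 1])
print("Discriminative P(win): %.2f" % discriminative.predict_proba(new_case)[0, 1])
```

The generative model learns how feature values are distributed within each outcome class and applies Bayes' rule, while the discriminative model learns the decision boundary directly; both, as the text stresses, only extrapolate from past examples and cannot reliably "predict" a judicial decision.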

Online Dispute Resolution

Yet another application of artificial intelligence in the field of justice is online dispute resolution. All European courts face, to a smaller or larger degree, repetitive civil cases of small pecuniary value. The idea of facilitating their processing through computer science and/or their assignment to other bodies (not courts of law) is widespread. Great Britain, the Netherlands and Latvia are examples of countries that have already implemented, or are about to implement, such more or less automated solutions. The scope of these online dispute resolution (ODR) services seems to have expanded gradually. They evolved from exclusively electronic dispute-resolution services into measures of alternative dispute resolution applied before the judicial appeal reaches the court, and they are already being incorporated into the judicial procedure, with the result that there are now electronic judicial services. As this model evolves, applicants will be able to resort to an automated prediction: by answering a series of questions, which the program will process, they will receive proposals for the resolution of the dispute. A typical example of hybrid dispute resolution is the program of the Cyberjustice Laboratory of Montreal, which integrates all pre-judicial and judicial stages into a single digital procedure of dispute resolution. Some writers maintain that the wide spread of these dispute resolution methods is a new manifestation of digital "solutionism", namely the systematic use of technologies in an effort to resolve problems which are not necessarily within their faculty (Morozov, 2014). In the European Union, a protective regulatory framework binding on member states has recently been established: Article 22 of the General Data Protection Regulation clearly provides that data subjects have the right not to be subject to a decision based solely on automated processing, with certain exceptions.
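
The question-driven flow described above can be illustrated with a minimal sketch; the dispute type, the questions, the thresholds and the proposals are all invented for illustration and do not reflect the rules of any actual ODR platform.

```python
# Minimal sketch of the question-driven ODR flow described above:
# the applicant answers a fixed series of questions and a rule set
# produces a settlement proposal. All rules and amounts are invented.

def propose_settlement(answers: dict) -> str:
    """Map questionnaire answers for a small consumer dispute to a proposal."""
    amount = answers["amount_eur"]
    if amount > 5000:
        return "Dispute exceeds the small-claims threshold: refer to court."
    if answers["goods_returned"] and answers["receipt_provided"]:
        return f"Proposal: full refund of {amount:.2f} EUR."
    if answers["receipt_provided"]:
        return f"Proposal: partial refund of {amount / 2:.2f} EUR, or mediation."
    return "Insufficient documentation: proposal is mediation."

print(propose_settlement(
    {"amount_eur": 240.0, "goods_returned": True, "receipt_provided": True}))
```

Even a rule set this simple makes the "solutionism" concern tangible: the proposals can only be as fair as the rules encoded in advance, which is why Article 22 GDPR preserves the right not to be bound by a purely automated decision.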

Online dispute resolution offers knowledge accumulated from previous judicial procedures. Its role is to provide services in the context of extrajudicial settlement, mediation and arbitration. These services can also be used in judicial procedures under the supervision of judges, before they decide the outcome of the dispute on the merits of the case (for certain disputes this phase is considered mandatory). On the other hand, the actual contribution of artificial intelligence should be evaluated, including whether it will be used solely for producing indicative scales or also for finding a solution. In any case, it should be possible to combine these artificial intelligence systems with the demands for transparency, neutrality and honesty (Pavillon, 2018).

Criminal Justice and Artificial Intelligence

More particular concerns arise in the field of criminal justice, mainly due to its peculiarity. The tools of criminal justice should be designed according to the fundamental principles of redemption/rehabilitation, including the role of the judge in customizing the sentence based on objective elements of personality (training, work, medical insurance and social welfare), without any other form of analysis, save for the one conducted by specially trained professionals, such as social reintegration workers. Big data analytics could be used by these professionals to centrally collect information concerning a person accused of a crime that is stored by several institutions and services, so that it can subsequently be examined by a judge, sometimes within a very short period of time (e.g., in the context of expedited trial procedures).

In general, a large number of electronic tools are widely used for the prevention of criminal acts (by locating the possible places where they may be committed or the possible perpetrators) or for the more efficient prosecution of the perpetrators (Završnik, 2017). The first category includes tools of "preventive policing" which contribute to the prevention of specific kinds of crime that occur on a regular basis, such as breaking and entering, street violence, theft from cars or motor vehicle theft. The choice of these tools is connected with their ability to accurately define the place and time at which such crimes will be committed and to reproduce this information on a geographical map in the form of "hot spots", which are monitored in real time by police patrols. This procedure is called "crime preventive mapping".
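
A minimal sketch of the grid-based logic behind such mapping follows; the incident coordinates and cell size are invented, and real systems rely on far more sophisticated spatio-temporal models than simple counting.

```python
# Minimal sketch of "crime preventive mapping": incident coordinates
# are binned into a grid and the densest cells are flagged as hot
# spots for patrol attention. All data and parameters are invented.
from collections import Counter

CELL = 0.01  # grid cell size in degrees (roughly 1 km); an arbitrary choice

# (latitude, longitude) of past burglaries; invented data
incidents = [
    (51.5071, -0.1278), (51.5083, -0.1271), (51.5092, -0.1284),
    (51.5304, -0.1003), (51.5076, -0.1292), (51.5309, -0.1008),
]

def cell_of(lat: float, lon: float) -> tuple:
    """Assign a coordinate pair to its grid cell."""
    return (int(lat / CELL), int(lon / CELL))

counts = Counter(cell_of(lat, lon) for lat, lon in incidents)

# The densest cells would be drawn as "hot spots" on the patrol map.
for cell, n in counts.most_common(2):
    print(f"hot spot cell {cell}: {n} incidents")
```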

Moreover, big data analytics is increasingly applied to the prosecution of crime. Tools like Connect, which is used by the police of the United Kingdom for the analysis of billions of data points produced by economic transactions in order to find correlations or patterns of activity, or the International Child Sexual Exploitation Database (ICSE DB), administered by Interpol, which assists in the identification of victims and/or perpetrators through the analysis of, for instance, furniture and other objects in abuse images, or of background noise heard in a video, have proved particularly effective in combating crime. With Connect, for instance, searches that in the past required months of investigation can now be completed within a few minutes, at a very high level of complexity and volume of data.

In contrast to police authorities, the use of "preventive" tools (Završnik, 2019; Marjanovic et al., 2022; Keddell, 2019; Morison & Harkens, 2019; Schlicker et al., 2021; Sela, 2018; Papagianneas, 2022; O'Malley, 2010; Lee & McGovern, 2013; Groff & Mazerolle, 2008) by judges in criminal trials is very rare in Europe. HART (Harm Assessment Risk Tool) was developed in cooperation with Cambridge University and is presently being tested in the United Kingdom. This technology, which is based on machine learning, was trained on the Durham police records from 2008 to 2012. By learning from the decisions of police officers during that period, and by taking into account whether some perpetrators repeated their delinquent behaviour, the machine is expected to be able to assess the risk of reoffending as low, medium or high, on the basis of variables some of which are not associated with the crime committed (e.g., post code and sex) (Barnes et al., 2018). At this experimental stage, HART has a purely consultative value for the judge.

In contrast with Europe, the United States of America already uses similar software, one example of which is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), which aims at assessing the risk of reoffending. However, research has shown that the algorithms it uses may lead to discriminatory treatment on the part of a judge when imposing the sentence. This algorithm, which was developed by a private company and the use of which is mandatory for judges in certain federal states, includes 137 questions that are answered either by the accused or on the basis of information extracted from their criminal record. The questions vary widely and include the existence of a telephone connection at home, difficulty in paying bills, the previous life of the members of the family, the criminal history of the accused, etc. The algorithm grades a person on a scale from 1 (low risk) to 10 (high risk). It is an aid in reaching decisions, as its conclusion is only one of the parameters a judge examines when deciding on a sentence. The research showed that African-American individuals were attributed a risk of reoffending twice as high as other groups of the population, within a period of two years from the imposition of the sentence, without this result having been intended by the designers. Conversely, the algorithm considered other groups of the population much less likely to repeat an offence (Skeem & Eno-Louden, 2007; Brennan et al., 2009; Herrschaft, 2014).
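
The scoring logic described above can be pictured schematically as follows; the items, weights and mapping are invented, since the actual COMPAS instrument is proprietary, and the sketch is included only to show how questionnaire answers might be reduced to a decile score.

```python
# Hypothetical sketch of a questionnaire-based risk score of the kind
# described above: weighted answers are summed and mapped to a decile
# scale from 1 (low risk) to 10 (high risk). Items and weights are
# invented; COMPAS's actual 137 items and scoring model are proprietary.

WEIGHTS = {                      # (question, weight) pairs
    "prior_convictions": 2.0,    # answers are 0/1 flags or small counts
    "age_under_25": 1.5,
    "unstable_housing": 1.0,
    "difficulty_paying_bills": 0.5,
    "family_criminal_history": 1.0,
}
MAX_SCORE = sum(w * 3 for w in WEIGHTS.values())  # crude upper bound

def decile_score(answers: dict) -> int:
    """Map questionnaire answers onto the 1..10 risk scale."""
    raw = sum(WEIGHTS[q] * answers.get(q, 0) for q in WEIGHTS)
    return max(1, min(10, 1 + round(9 * raw / MAX_SCORE)))

defendant = {"prior_convictions": 2, "age_under_25": 1,
             "difficulty_paying_bills": 1, "family_criminal_history": 0}
print("Risk decile:", decile_score(defendant))
# Caution: items such as family history or housing act as proxies for
# group membership, which is how the disparate outcomes described in
# the text can arise even without the designers intending them.
```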

The use of algorithmic variables such as criminal record and family background means that the past conduct of a specific group may determine the fate of an individual, who is, of course, a unique human being with a specific social background, training, skills, degree of guilt and specific motives that drove them to commit a crime (Završnik, 2017). In criminal cases there is also the risk of discriminatory treatment, given that these tools, which are designed and interpreted by humans, may replicate unjustifiable and already existing inequalities in a particular system of criminal justice: instead of correcting certain problematic policies, technology may end up legitimizing them. In addition, the lack of transparency during the development of algorithms by companies and the absence of accountability to the public give rise to concerns, especially since decisions of state authorities also intervene in making the data available to the public. In this context, there is a difference between Europe and the United States as to the right of access to algorithms: while in the United States judicial authorities are still unwilling to fully acknowledge this right and to weigh private interests (particularly the protection of intellectual property) against the right of defence, in Europe the context is more protective owing to the General Data Protection Regulation, which strengthens the right to information regarding the reasoning underlying decisions reached through the use of algorithms.

ChatGPT: Artificial Intelligence in Justice

ChatGPT is a revolutionary AI language model developed by OpenAI that is primarily designed to mimic patterns of human conversation (Metz & Weise, 2023). It can process large volumes of data and documents quickly and accurately, which can help streamline judicial decision-making, identify patterns and connections that may go unnoticed by humans, and reduce costs by automating tasks and processes (Rupert, 2023; Perlman, 2022; Iu & Wong, 2023).

Some courts, as we have already seen above, have controversially begun using automated decision-making tools in determining sentencing or whether criminal defendants are released on bail. In fact, China has already deployed AI systems to reduce the workload of judges by searching court cases for relevant references, making recommendations on laws and regulations, drafting documents and correcting errors in verdicts. It is therefore foreseeable that the integration of AI systems into the judicial system will lead to a significant increase in efficiency in the future. In Argentina, the use of AI technologies in the legal sector is increasing, with a focus on the use of chatbots such as ChatGPT. Indeed, in Colombia, for the first time, a legal decision was registered that was made using an AI text generator. This is apparently the first time a judge has admitted to having done so, on the basis that such programs could be useful to facilitate the drafting of texts, but not with the aim of replacing judges. The judge argued that ChatGPT takes over services previously provided by a secretary "in an organised, simple and structured manner" that could "improve response times" in the justice system (Taylor, 2023; Janus, 2023).

The use of these systems in courts has been heavily criticised by AI ethicists, who point out that they regularly reinforce racist and sexist stereotypes and entrench already existing forms of inequality (Janus, 2023). Interestingly, according to an article in the Guardian, the chatbot itself was rather apprehensive about its new role in the justice system:

"Judges should not use ChatGPT when ruling on legal cases … It is not a substitute for the knowledge, expertise and judgment of a human judge" (Taylor, 2023).

The study by Iu & Wong (2023) pointed out that ChatGPT has advanced legal drafting skills for various types of documents, including demand letters, without-prejudice letters and pleadings, and that it can identify legal strategies, draft summary judgments, frame arguments, cross-examine, and provide basic legal advice. However, there are limitations, and it is important to bear in mind that ChatGPT is not a tool designed specifically for the legal industry.

Moreover, the use of AI in the legal industry will undoubtedly give rise to a variety of legal concerns, including issues of unauthorised professional practice and potential misuse of, and over-reliance on, the information generated by these types of tools. Like any AI system, ChatGPT can be biased or make mistakes in its responses. It is important to consider the potential ethical implications of using such a system and to take steps to mitigate any possible negative consequences, as Octavio Tejeiro, judge of the Supreme Court of Colombia, has also stressed.

Afterword: Personal Data Protection Limitations

The principle of lawfulness in the processing of personal data, and the obligation to eliminate or minimize the consequences of data processing for the rights and fundamental freedoms of individuals, mandate the a priori management of the related risks by providing for the application of suitable measures, in particular by design and by default, in order to limit those risks. As personal data must only be processed for specific and lawful purposes, they should not be used in a way that is incompatible with those purposes, nor further processed in a way that the data subject would consider unexpected, unsuitable or questionable (principle of fairness). The issue of re-using personal data that would be widely accessible must therefore be treated with the greatest care. The design of the data-processing methods used by algorithms must minimize the presence of unnecessary or marginal data, prevent any covert bias, and avert any risk of discriminatory treatment or negative effect on the fundamental rights and freedoms of the data subjects. When artificial intelligence is used, the rights of data subjects acquire particular importance. The control everyone must have over their personal information thus gives rise to the following rights: the right of data subjects not to be subject to automated decisions that significantly affect them without having their opinion taken into consideration, the right to be informed of the reasoning on which algorithmic data processing is based, the right to object to such processing, and the right to judicial protection.
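
By way of illustration, data minimization "by design and by default" can be pictured as an allow-list applied before any record reaches an algorithm; the field names below are hypothetical and the sketch is not a compliance mechanism, only a picture of the principle.

```python
# Minimal sketch of data minimization "by design and by default":
# before any record reaches an algorithm, only an explicitly approved
# allow-list of fields survives, so unnecessary or marginal data never
# enter processing. Field names are invented for illustration.

ALLOWED_FIELDS = {"case_type", "claim_amount", "jurisdiction"}

def minimize(record: dict) -> dict:
    """Keep only fields approved for the stated processing purpose."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw_record = {
    "case_type": "contract",
    "claim_amount": 12000,
    "jurisdiction": "Athens",
    "claimant_name": "M. Papadopoulou",        # identifying: dropped
    "claimant_ethnicity": "(sensitive)",        # sensitive: dropped
}
print(minimize(raw_record))
```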

References

Aletras, N., Tsarapatsanis, D., Preoţiuc-Pietro, D., & Lampos, V. (2016). Predicting judicial decisions of the European Court of Human Rights: A natural language processing perspective. PeerJ Computer Science, 2, e93.

Barnes, G., Sherman, L., & Urwin, S. (2018). Needles and haystacks: AI in criminology. Research Horizons: Pioneering Research from the University of Cambridge, 35, 32-33.

Brennan, T., Dieterich, B., Breitenbach, M., & Mattson, B. (2009). A response to assessment of evidence on the quality of the correctional offender management profiling for alternative sanctions. Traverse City, MI: Northpointe Institute for Public Management.

European Commission for the Efficiency of Justice. (2018). European ethical charter on the use of artificial intelligence in judicial systems and their environment. Council of Europe.

European Commission. (1999). Public sector information: A key resource for Europe-Green paper on public sector information in the information society. Brussels.

Groff, E., & Mazerolle, L. (2008). Simulated experiments and their potential role in criminology and criminal justice. Journal of Experimental Criminology, 4, 187-193.

Herrschaft, B.A. (2014). Evaluating the reliability and validity of the correctional offender management profiling for alternative sanctions tool. Implications for community corrections policy. Doctoral dissertation, Rutgers University-Graduate School-Newark.

Iu, K.Y., & Wong, V.M.Y. (2023). ChatGPT by OpenAI: The end of litigation lawyers?

Janus, R. (2023). A judge just used ChatGPT to make a court decision.

Jean, J.P. (2016). Thinking about the purposes of the necessary opening of case law databases. Paper presented at a conference held at the French Court of Cassation.

Keddell, E. (2019). Algorithmic justice in child protection: Statistical fairness, social justice and the implications for practice. Social Sciences, 8(10), 281.

Lee, M., & McGovern, A. (2013). Procedural justice and simulated policing: The medium and the message. Journal of Policing, Intelligence and Counter Terrorism, 8(2), 166-183.

Marjanovic, O., Cecez-Kecmanovic, D., & Vidgen, R. (2022). Theorising algorithmic justice. European Journal of Information Systems, 31(3), 269-287.

Metz, C., & Weise, K. (2023). Microsoft bets big on the creator of ChatGPT in race to dominate AI. The New York Times.

Morison, J., & Harkens, A. (2019). Re-engineering justice? Robot judges, computerised courts and (semi) automated legal decision-making. Legal Studies, 39(4), 618-635.

Morozov, E. (2014). To save everything, click here: The folly of technological solutionism.

O'Malley, P. (2010). Simulated justice: Risk, money and telemetric policing. The British Journal of Criminology, 50(5), 795-807.

Papagianneas, S. (2022). Towards smarter and fairer justice? A review of the Chinese scholarship on building smart courts and automating justice. Journal of Current Chinese Affairs, 51(2), 327-347.

Pavillon, C. (2018). Concerns about popular digital judge.

Perlman, A. (2022). The implications of ChatGPT for legal services and society.

Rupert, M.D. (2023). ChatGPT & generative AI systems as quasi-expert legal advice lawyers: Case study considering potential appeal against conviction of Tom Hayes.

Schlicker, N., Langer, M., Ötting, S.K., Baum, K., König, C.J., & Wallach, D. (2021). What to expect from opening up black boxes? Comparing perceptions of justice between human and automated agents. Computers in Human Behavior, 122, 106837.

Sela, A. (2018). Can computers be fair? How automated and human-powered online dispute resolution affect procedural justice in mediation and arbitration. Ohio State Journal on Dispute Resolution, 33(1), 91-148.

Skeem, J., & Eno-Louden, J. (2007). Assessment of evidence on the quality of the correctional offender management profiling for alternative sanctions. Unpublished report prepared for the California Department of Corrections and Rehabilitation.

Taylor, L. (2023). Colombian judge says he used ChatGPT in ruling. The Guardian.

Završnik, A. (2017). Big data, crime and social control. London, New York: Routledge, Taylor & Francis Group.

Završnik, A. (2019). Algorithmic justice: Algorithms and big data in criminal justice settings. European Journal of Criminology, 18(5).

Received: 24-Feb-2023, Manuscript No. JLERI-22-13261; Editor assigned: 27-Feb-2023, PreQC No. JLERI-22-13261(PQ); Reviewed: 10- Mar-2023, QC No. JLERI-22-13261; Revised: 24-Mar-2023, Manuscript No. JLERI-22-13261(R); Published: 31-Mar-2023