The right to human interaction – Angelo Monoriti

Human interaction has long been considered the ‘backdrop’ to social, economic and legal relations. However, with the advent of digital technologies and, more recently, artificial intelligence, human interaction has become an ‘at-risk’ asset and, therefore, ‘visible’ as an absolute value, thus taking on the clear consistency of an interest worthy of legal protection.

But what is human interaction? Just as there is no vacuum between two neighbouring stars in a galaxy, but rather an invisible force (gravity), so too between two human beings there is not only distance, but a relational force that attracts, influences and structures them. Human interaction is this force. It is a field of mutual attraction that shapes identities. It is intimately connected to the idea of ‘relational identity’ (D. Shapiro, Negotiating the Non-negotiable: How to resolve your most emotionally charged conflicts, New York, 2016) and responds to the primary psychological need for ‘affiliation’ (see R. Fisher and D. Shapiro, Beyond Reason – Using Emotions as You Negotiate, London, 2016). 

Artificial intelligence, by its very nature, has the function of enhancing – and, in some cases, even surpassing – human beings. This enhancement, while beneficial to the individual, has an effect on relationships with others. If one party is ‘enhanced’ by artificial intelligence, the other party, who does not enjoy the same enhancement, suffers harm to the relationship and to their need for ‘affiliation’. In other words, the more a human being is enhanced or replaced, the more the other experiences a subtle but real form of frustration in their need for authentic human relationships. From the perspective of those who interact with a human being enhanced or replaced by AI, (human) interaction increasingly becomes ‘simulation’.

The relationship between AI and human interaction is therefore characterised by a tension between the expansion of the innovative potential of technology and the need to preserve the unique qualities of human relationships. It is clear, moreover, that the true commercial potential of AI lies precisely in reducing the time and cost of human interaction, replacing the difficulty and slowness of human negotiation or decision-making with the efficiency and speed of an algorithm. This dynamic therefore requires critical reflection to ensure that AI is used in a way that supports, rather than erodes, the richness and complexity of human interaction within the principal mechanisms in which such interaction takes place – mechanisms which are, in turn, the main engines of the functioning and evolution of our society: agreement and decision-making.

From this perspective, we will therefore find ourselves distinguishing, on a legal level, between ‘human’ decisions and algorithmic decisions on the one hand, and agreements based on human interaction and agreements based on algorithmic calculations on the other. The risk associated with decisions based on algorithmic calculations is that the ‘participation’ of the parties concerned is relegated to a mere formality, if not a ‘fiction’, i.e. a ‘fake relationship’ between the decision-maker and the party affected by the decision. The presence of a human being on the side of the Administration – which case law considers necessary to the extent that the condition of ‘imputability’ of the decision to a human being can be said to be respected (see Italian Council of State, Section VI, 04/02/2020, no. 881) – risks becoming merely formal: an abstract figure incapable of conveying the authentic meaning of the adversarial process.

Similarly, on the agreements front, an interaction between ‘machines’ cannot be considered a ‘negotiation’ (which is such insofar as it is a ‘human interaction’) because it is a mere algorithmic calculation. This phenomenology will therefore mark a further shift forward in the frontier linked to the concept of ‘contract’. As highlighted by authoritative doctrine (N. Irti), we have already witnessed the transition from the actual agreement (in which the focus is on words and dialogue) to adherence (no longer a dialogical exchange, but two ‘unilateral decisions’ represented by the unilateral preparation of the written text of the form and unilateral adherence) to finally arrive at ‘exchanges without agreement’ (i.e., the physical exchange of goods and purchases on the internet, where words and dialogue have disappeared and ‘agreement has been reduced to the unilateral decisions of displaying and choosing’). The phenomenology of ‘algorithmic calculation agreements’ now brings us to the new frontier of ‘calculations without agreement’. 

The result, in effect, is a calculated solution, not an agreement negotiated in the human sense of the term. The greatest risk of introducing AI into negotiation processes therefore appears to be linked not so much to the quality and sustainability of the content of the agreements reached, but rather to the damage to the fundamental need for human interaction that is inherent in every individual. After all, direct contact between people is essential not only for the correctness of decisions and the quality of agreements, but also – and above all – for maintaining our identity as human beings. Indeed, as G. Cosi insightfully observed, ‘we are not born human beings, we truly become human beings through interaction with other human beings’ (L’accordo e la decisione, Modelli culturali di gestione dei conflitti, 2017).

There is therefore an urgent need to recognise the legal protection of every human being’s interest in being able to interact with another human being in decision-making and negotiation. In other words, it will be necessary to recognise human interaction as a legal interest worthy of protection, not only as a social value, but as a fundamental right of constitutional rank. This is the right of every individual to be ‘recognised’ and ‘heard’ by another human being (and not by a machine) and, therefore, not to be deprived of humanity in the relationship. It is a fundamental human right aimed at protecting the human core of the relationship against algorithmic substitution.

In fact, on the one hand, there is no explicit provision concerning the right to human interaction in the Italian Constitution and, on the other hand, the rationale behind the provisions dealing with the ‘human’ presence in automated processes (see, for example, Article 22 of the European Union’s General Data Protection Regulation – ‘GDPR’ – and Article 14 of Regulation 2024/1689/EU – ‘AI Act’) appears to be limited to (real or fictitious) control over the merits of the decision, i.e. the attribution of responsibility to a ‘human’, whereas the (fundamental) need of citizens for ‘interaction’ with a ‘human’ is neglected. In other words, the provisions under consideration only require that there be ‘supervision’ of the functioning of AI by a human being, but certainly not that a ‘human’ must be ‘present’ in the process to ‘listen’ and interface with the data subject. 

It is clear, therefore, that the rules currently in force are the result of an ‘incomplete’ balance on the part of the legislator, as they are limited to the relationship between the collective interest in the development of AI and the individual interest in the fairness of automated measures; a balance from which the fundamental interest of each citizen in ‘interacting’ with a ‘human’ is therefore completely absent. Starting from this consideration, and in a legal context represented by a series of constitutional provisions that already reflect a ‘relational’ conception of the human being (Articles 2, 3, 17, 18, 24, 25, 32, 34), the basis for the right to human interaction can be unequivocally found in Article 2 of our Constitution which, as is well known, ‘recognises and guarantees the inviolable rights of the individual, both as an individual and in the social groups where human personality is expressed’. 

After all, just as the right to privacy (the right to be let alone) – now recognised and consolidated in most modern legal systems – represented a ‘legal’ response aimed at satisfying a ‘fundamental psychological need’ of human beings (D. Shapiro) – namely, that of ‘remaining separate from others’ (autonomy) – now, in the face of the intrusiveness of AI, the recognition of the right to human interaction would also allow us to ‘preserve’ the other opposite and unavoidable fundamental psychological need of all human beings: that of human connection (affiliation).

As ‘humans’, all individuals ‘approach’ and ‘distance’ themselves at the same time. Together, they ‘reject’ each other (due to the need for ‘autonomy’) and ‘attract’ each other (due to the need for ‘affiliation’) in a continuous search for their relational identity. This constant psychological-relational tension – inherent in ‘human interaction’ and which must therefore be safeguarded – was well framed by Schopenhauer who, with the wonderful metaphor of porcupines (trying to keep warm on a winter’s day), also attempted to indicate what should be the ‘optimal distance’ between human beings: ‘close enough to benefit from each other’s warmth, but far enough apart not to prick each other’. 

In an age of rapid technological innovation, therefore, recognising the right to human interaction could be the necessary guarantee to prevent technology from taking precedence over people, and could be an effective legal measure to prevent the use of AI from exceeding the limits of mutual recognition and human dignity. As Pope Leo XIV also pointed out (Message of Pope Leo XIV to participants in the second annual conference on artificial intelligence, ethics, and corporate governance), although AI can certainly provide valuable assistance to human beings, this must be on condition that it does not undermine the identity and dignity of the person.

Angelo Monoriti is an expert in negotiation and dispute resolution in civil and commercial matters. He is adjunct professor of negotiation and head of the ADR-Negotiation Legal Clinic at the Department of Law at Luiss Guido Carli University. He is a lecturer in the Negotiation and Mediation Procedures Laboratory at the University of Rome Unitelma Sapienza. He is adjunct professor of Strategic Negotiation and Conflict Management as part of the Master’s Degree Course in Strategic Management, Innovation and Sustainability at the Faculty of Business and Management at Luiss University. He is also a lecturer in the course of Business Organisation as part of the Degree Course in Economics and Management at Luiss University. He teaches negotiation at the Master’s Degree in Legal Advisor and Human Resources Management and the Master’s Degree in Business and Company Law at the Luiss School of Law. He also teaches at the Master’s Degree in Corporate Legal Advisor at the Luiss Business School. From 2017 to 2019, he was a teaching assistant for the “PON Global” course held in Italy by professors from the Program on Negotiation at Harvard Law School and organised in collaboration with Luiss Guido Carli and ADR Center. He is the co-author of the book NegotiACTION – Negotiation Essentials, McGraw-Hill, 2023. He is the author of articles and commentaries on negotiation and mediation.