Aims of the congress

The Orbicom – ITI LiRiC CIAREI Congress “Communication, artificial intelligence, remediation, ethics and inclusion” aims to take stock of the opportunities and risks associated with the development of generative artificial intelligence (AI) in communication, particularly from the perspective of ethical use and inclusion, as well as the remediation of societal problems. While AI in its various forms has been developed by computer scientists since the 1950s, and used in industry for several decades, its spread changed scale radically in 2022 with the arrival of ChatGPT, which provides everyone with new and exceptional capabilities (Byk 2023) for writing text, producing images and, more generally, managing large quantities of data in complex ways. Numerous competitors have since entered the market, and the sector is booming thanks to investments by the GAMMAs (Google, Apple, Meta, Microsoft, Amazon) and the Chinese BATXs (Baidu, Alibaba, Tencent and Xiaomi). In most professional sectors and areas of social life, questions are being raised about the relationship between humans and machines and the new arrangements that will be put in place. Beyond the prophecies announcing the end of humanity (Lavazza and Vilaça 2024) or, conversely, a radiant future thanks to technology (Bittencourt et al. 2023), it is the role of the humanities and social sciences to question the consequences of the implementation of AI in our societies.

The questions raised by the congress are cross-cutting and concern ethics as an imperative framework for the use of AI systems, together with two purposes: remediation, i.e. the ability to propose solutions to individual or collective problems (Duru-Bellat 2009; Rose 2002), and inclusion, which aims to build open and accessible environments that value diversity and promote equity (Sen 2000; Tisseron and Tordo 2022). Researchers will be expected to focus on one of the following areas.

Important dates
Congress: September 29, 30 and October 1, 2025
Conference venues: Strasbourg (France), Maison des sciences de l'Homme d'Alsace (Misha) and the European Parliament.
Fees: €190 for the three days, including breaks and meals. Half price for doctoral students. Members of Unistra and of the Unesco chairs are exempt from fees.
Contact address: chaireunesco@iutrs.unistra.fr

Submission topics for the congress

Axis 1: Communication, AI and Ethics in education

The rapid development of artificial intelligence (AI) is profoundly changing our societies, influencing various sectors, including communication and education. In this context, synergies between these fields are becoming crucial to understanding and exploiting the potential offered by AI. Axis 1 of this congress aims to explore how AI can transform communication and education and, conversely, how communication and educational approaches can in turn influence the development and adoption of AI, both by educational institutions and by pupils and students. Since at least 2022, AI ethics (Zacklad and Rouvroy 2022) has been invoked by everyone as a linguistic fad, but it has also become a pertinent research topic, particularly with regard to the uses of AI in education (Holmes et al. 2021) and, for example, in the case of people with disabilities (Neuralink, Seeing AI, ...). There is no single AI ethic, and the questions it raises are being addressed by a variety of disciplines: information and communication sciences, education sciences, computer science, law, philosophy, psychology, etc.

Axis 2: Political and organizational communication

Beyond the issue of its protection, personal data is a necessary resource for public and political communication, both during election campaigns and in times of governance. Institutions, political players, the media and citizens can use this resource, whether or not coupled with AI, to promote their interests and messages. But these uses carry inherent risks for data protection and respect for democratic rules (Portnoff and Soupizet 2018). Advances in artificial intelligence are leading to achievements such as the recognition of people in images, the automatic creation of content, semantic analysis, etc., while exacerbating fears relating to the enslavement of humans by machines, the manipulation of behavior, mass surveillance, etc. (Villani et al. 2018). This axis of the congress raises various questions: how does AI impact geopolitics and public policy strategies? What are the powers and counter-powers of AI in governance and decision support (Conseil d’état 2022)? How do data protection and the principle of truth apply, as well as issues of ethics and deontology? Between the media and the individual citizen, what is the role of AI in networks of influence (Ronzaud and Ruan 2022)? How can we build trust and enhance performance in public action?

Axis 3: Media/journalism and the challenge of artificial intelligence

Is AI a risk-free asset for journalists and, more generally, for journalism? Is the arrival of AI in newsrooms just the latest evolution in a system that has always been highly evolutionary (Béasse and Viallon 2023)? From the point of view of both news professionals and the public, can we envisage a mutual and balanced shaping of the journalist-technology relationship that builds on the specific social dimension of journalistic activity?
The contributions expected in this area will focus on the concrete applications of AI in journalistic practices (Saint-Germain and White 2023) and their impact on journalists and their audiences. In particular, the aim will be to take a critical look at the prospects for AI in journalism, in light of the existential challenges currently faced by the sector.

Axis 4: Environmental communication

This axis questions the ethical challenges and opportunities of AI for environmental communication, which has become a major societal issue requiring a global, cross-disciplinary approach (Libaert 2016). The use of AI makes it all the more necessary to question the risks of “ethical bias” linked to the data used to train the algorithms (Domenget et al. 2022), as environmental topics are particularly likely to give rise to iconic and linguistic discourses that can create a distorted imaginary (Fodor 2011). From the point of view of the technologies used, “the environmental footprint of generative AI, particularly in terms of greenhouse gas emissions and water consumption, remains considerable” (Le Goff 2023). There is therefore a need to reflect on the development of sustainable, environmentally friendly AI solutions, but also on the ethical, responsible and sustainable use of these tools. Among the questions to be addressed by the speakers are: how can we ensure that the data processed by AI faithfully represents environmental reality, without exaggeration or distortion? What kinds of biases can occur in the processing of environmental data, and how do these biases influence the perception of environmental issues? To what extent does AI contribute to shaping an ecological imaginary that could distance the public from scientific reality? How can AI be used to reinforce responsible communication and avoid the effects of sensationalism or manipulation of environmental discourse? What are the direct environmental impacts of using AI in environmental communication, and how can they be measured? What ethical or normative frameworks could be adopted to guide the development of AI technologies that respect the principles of sustainable development?

Axis 5: Technology, humans and post-humans

Whether via disembodied robots (conversational agents) or embodied robots (NAO, SOPHIA...) (Dolbeau-Bandin 2021; Dolbeau-Bandin and Wilhelm 2022), AI makes it possible to hold a conversation, manage complex tasks or even live new experiences in virtual universes (metaverses). These connections (Rosental 2021) between humans and machines allow several visions to coexist: the assimilation of humans to machines; cooperation; symbiosis (Brangier et al. 2009; De Rosnay 2018; Ertzscheid 2024); and entanglement, which refers humans to their relationships with interfaces and sociotechnical devices (wearable devices and other Internet of Things (IoT) artifacts (Jeannin 2022), virtual/augmented reality sensors (Bonfils 2015)). The boundaries between organic and inorganic, living and artificial are blurring (Tisseron 2018). Aside from the undeniable opportunities, the analysis and processing of massive data facilitated by AI (Boyd and Crawford 2012; Soudoplatoff 2018; Villani 2018), coupled with this bodily digital materiality, is not without effects on humans.
Several avenues can be explored, for example: the question of the human and the post-human in a neoliberal/capitalist and techno-scientific ideological context; the body in the making (experiments, modifications, virtualization, representations); the place and status of the body within an expanded informational system with its inherent risks; and communication with these disembodied conversational agents and embodied robots.

Axis 6: Interculturality

When ChatGPT is asked about the link between AI and interculturality, many contributions are put forward: “it facilitates translation and simultaneous interpretation, it personalizes language and culture learning, it enables cultural simulations, its algorithms analyze feelings and cultural trends on social networks, it also helps with online mediation and intercultural conflict prevention, etc.”. The prevailing logic is that the tool is there to facilitate understanding, which can create the illusion of easier intercultural communication (Yang et al. 2024). However, we run the risk of impoverishing this communication by replacing authentic interactions between individuals with ready-made standard answers that limit the richness of human exchanges (Heddad 2024). At a time when AI is developing, it is important to document practices related to interculturality (Oustinoff 2019) and to consider how they contribute, or could contribute, to enriching intercultural interactions (Dai and Hua 2024).

Axis 7: Implicit and identities

The implicit is a foundation of collective identities (Roth 2022a). It allows us to convey the messages of invention of tradition that preside over the unifications of nation-states (Hobsbawm and Ranger 1983), to reinforce inclusion through presupposition and exclusion through implication (Kerbrat-Orecchioni 1986), to circumvent censorship during authoritarian political mutations, and finally to naturalize identities in order to make them incontestable (Roth 2022b). In other words, it is the support of unquestioned convictions and the tool of manipulations (Cervulle and Quemener 2012). For generative AI, this medium of invisible mediations is essential: mastering it is one of the main challenges posed to neural networks (the notion of “common sense”; Le Cun 2023), and its use by the powerful GAFAMs could profoundly influence identities, making enlightened sociotechnical vigilance necessary. Expected contributions concern all forms of the unsaid in connection with AI: their generation, interpretation and use.

Axis 8: AI and libraries

This axis looks at how the profession of librarian is being affected by the rise of generative artificial intelligence (GAI). More specifically, it pursues three objectives: (1) qualify the concrete transformations of the librarian's job under the effect of GAI (Jacob et al. 2022); (2) measure the factors influencing these transformations and their impact on librarians' working conditions; (3) develop thinking about users' emerging informational practices and the adaptations that librarians can make (Chaudhry and Iqbal 2021; Guérin 2012).

Axis 9: Preservation of linguistic and cultural diversity

This axis examines the challenges posed by the development of digital communication on a global scale for the preservation of linguistic and cultural diversity.
Indeed, this development mainly benefits languages that are already widely spoken and undoubtedly contributes to maintaining their domination, with the risk of glottophagy (Calvet 1974) that this entails. But the development of digital communication also enables languages traditionally confined to local orality to gain unexpected visibility and distribution for their speakers, provided that automatic processing resources and tools are adapted to the characteristics of these languages (Bernhard et al. 2021). To what extent can machines become agents in the revitalization of languages and cultures (Krebs 2024)?

Axis 10: AI and bias in language and communication

Texts created by generative AI are based on large language models (LLMs) whose sources are neither known nor verified. These models do not take into account aspects such as the feminization of job titles, non-binary aspects of gender or inclusive writing. Communication tends to reproduce the same stereotypes (Bernheom et al. 2019), as the vast majority of texts used for training privilege the generic masculine. Much of the training data comes from the translation of texts from English into other languages. The aim is therefore to study the artificial speech generated by AI, to investigate possible hallucinations (Park and Lee 2024) and to propose corrective solutions. Furthermore, the way humans communicate has been profoundly altered: written communication can now be spontaneous and immediate, and thanks to AI, an Internet connection may suffice to translate a text from one language to another. But not all human beings, nor the languages they speak or the cultures to which they belong, have equal access to the digital resources and tools needed to do this, so that these resources have become a determining factor in the major or minor relationships that exist between languages and cultures around the world (Krebs 2024).

For all axes, proposals must integrate the dimensions of “ethics”, “inclusion” and/or “remediation”. They must indicate the axis to which they relate, be written in French or English, and contain around 4,000 characters presenting the problem, research question and hypotheses, plus 10 bibliographical references in accordance with the APA 7 standard. Selection will be based on the usual scientific standards of double-blind review. The conference presentations will be held in French or English; papers may be submitted in French, English or Spanish.

Scientific Committee

Abdallah May, Université libanaise, Beyrouth, Lebanon
Amadio Nicolas, Université de Strasbourg, France
Aoudia Nacer, Université de Béjaia, Algeria
Akhiate Yassine, Université Mohammed V, Rabat, Morocco
Azemard Ghislaine, Université Paris 8, France
Béasse Muriel, Université de Haute-Alsace, France
Bendahan Mohamed, Université Mohammed V, Rabat, Morocco
Brassier Cécilia, Université de Clermont Auvergne, France
Chaouni Nawel, Université de Toulouse, France
Chiachiri Roberto, Methodist University of São Paulo, Brazil
Chevry Pébayle Emmanuelle, Université de Strasbourg, France
Commissaire Eva, Université de Strasbourg, France
Damome Etienne, Université de Bordeaux-Montaigne, France
D’Apote-Vassiliadou Hélène, Université de Strasbourg, France
Djengue Samuel, Université Abomey-Calavi, Benin
Dolbeau-Bandin Cécile, Université de Caen Normandie, France
Dragan Adela, Université Dunărea de Jos, Galati, Romania
El Khoury Farhat, Université de Strasbourg, France
El Mendili Soumaya, Université Mohammed V, Rabat, Morocco
Erhart Pascale, Université de Strasbourg, France
Faisal Bakti Andi, Islamic University, Jakarta, Indonesia
Fusaro Magda, Université du Québec à Montréal, Canada
Gardère Elizabeth, Université de Bordeaux, France
Geiger-Jaillet Anémone, Université de Strasbourg, France
Guerrero Manuel Alejandro, Universidad Iberoamericana, Mexico City, Mexico
Hellal Mohamed, Université de Sousse, Tunisia
Hounnou Cédric, Université de la Côte d’Azur, France
Jeannin Hélène, Université de Strasbourg, France
Jreijery Roy, Université libanaise, Lebanon
Kabore Dimkêeg Sompassaté Parfait, Université Thomas Sankara, Burkina Faso
Krebs Viola, Université de Genève, Switzerland
Liu Lu, Université de Clermont Auvergne, France
Merah Aissa, Université de Béjaia, Algeria
Olmedo Eric, Ho Chi Minh City University, Vietnam
Picot Jérémy, Université de Strasbourg, France
Puica Gisela, Université Suceava, Romania
Rico De Stoledo Carmen, Université du Québec à Montréal, Canada
Roth Catherine, Université de Haute-Alsace et INRIA, France
Salles Chloé, Université de Grenoble-Alpes, France
Serrano Yeny, Université de Strasbourg, France
Tendeng Frédéric, Université de Strasbourg, France
Todirascu Amalia, Université de Strasbourg, France
Trestini Marc, Université de Strasbourg, France
Viallon Philippe, Université de Strasbourg, France
Zerva Maria, Université de Strasbourg, France

Organizing Committee

University of Strasbourg: Unesco Chair “Journalistic and Media Practices”, ITI LiRiC (Language, Inclusion, Remediation, Interculturality and Communication), IUT Robert Schuman, Information-Communication Department, Digital Culture Center
CERREV (University of Caen Normandie)
Communication et Sociétés (University of Clermont Auvergne)