Digitization and the rise of artificial intelligence forecast radical change in all aspects of human practice, especially given the ever-improving abilities of algorithms in tasks like pattern recognition and their practical application. Powerful technology arises from AI research, opening the door to various forms of cultural and societal engineering, i.e., a reshaping of culture and society by dint of algorithmic models and “intelligent” applications. To date, however, even highly trained algorithms outperform humans only in very specific tasks with limited scope (e.g., chess), as opposed to banal – yet cognitively highly complex – everyday actions like assessing the immediate consequences of a lie.
Thus, although the development of artificial intelligence is still in its beginnings, it has already triggered an enormous rush of utopian and dystopian thinking. While some dream of immortality and the vanquishing of poverty, disease, and warfare, others foresee a grim future for those parts of humanity that will find themselves outpaced by technology. The potential consequences of technological advancement for human practice range from the level of the individual through cultural techniques to the organization of society as a whole, raising fundamental questions, such as:
- How does artificial intelligence impact our understanding of the human mind, especially in relation to the role of its computational equivalents, which reach into more and more aspects of everyday life (e.g., chatbots, driverless mobility, risk assessment software in the banking and insurance sector)?
- What are the consequences of digitization and machine learning algorithms for education and our understanding of learning and creativity (e.g., in schooling through adaptive tutors, but also against the background of our current notion of creativity as a unique human ability)?
- How will the increasing use of computational methodology change the ways we relate to the past and envision the future (e.g., by reading), both in academia and in society?
- How can the enrichment of algorithmic models with methods and results from the humanities shape and improve the computational assessment of human practice (e.g., data mining of big text corpora, automated translation, racial bias in neural networks)?
- How does the use of artificial intelligence in all domains of human practice influence how we deal with complexity (e.g., of society) and human control thereof? Can computational methods help to reduce, organize, and analyze cultural complexity, or do they pose a threat to human control over different aspects of the lifeworld (e.g., security and network technology, automation of industrial production, autonomous weaponry)?
Against the background of such questions, the conference aims to foster an open and critical reflection on the consequences of cultural and societal engineering. After all, algorithmic models influence society and culture far beyond the limited scope of their practical application. They reshape communicative processes as well as the ways we interact with the world, and thus subject culture and society to new forms of ‘engineering’. By including scholars and scientists from the humanities and social sciences (in the broad sense of the word) as well as from AI research, this conference focuses on the implementation of advanced technology in different domains of everyday life, i.e., in relation to concrete areas of application (e.g., pattern recognition) as well as associated human-machine interfaces (e.g., devices and applications). In doing so, the conference investigates not only the opportunities and shortcomings of AI research, but also the implications and potential structural effects of technological innovation for the organization of societal practice (e.g., work) and for techniques of cultural self-reflection (e.g., history). It will ask not only what technologies can do (or will be able to do in the future), but also how these capabilities can be compared and related to their human equivalents, e.g., perception, cognition, and communication.
Accordingly, the conference will establish a reflection on the methodological differences between the areas of scholarship involved. Hosted by a faculty that unites the humanities and social sciences, the conference aims at exploring how empirical sciences based on statistical evidence and scholarship grounded in interpretation can learn from one another, how they can profit from computational methodology, and how they are potentially altered by information technology. A central perspective in this context relates to the question of how the humanities and social sciences develop and implement models – and to what extent they reflect upon the preconditions and implications of their ways of modeling their objects of interest. It is assumed that while the humanities can learn a lot from the rigorous procedures of building, testing, and evaluating models in (data-driven) scientific research, the sciences can benefit from the critical approach the humanities take when dealing with concepts and models for the (interpretation-driven) analysis and assessment of human practice, e.g., in assessing the unintended consequences of models, or their role in society as a whole.
The conference will focus on the following areas of interest, each of which will assemble representatives of different disciplines:
● Consciousness & computing: How do models of the mind and of consciousness relate to models of data processing as used in AI development?
● Experience & effectiveness: To what extent are AI algorithms capable of self-experience, or does their effectiveness rely precisely on their not being capable of experience?
● Action & assessment: How does our notion of social activity and of assessing others change with the emergence of AI as a new type of intelligent, but unconscious, social actor?
● Neurons & networks: How can models of “self-learning” neural networks be compared to models of learning developed by psychology, the cognitive sciences, and pedagogy?
● Classification & creativity: How do our notions of genuine human competencies, such as complex pattern recognition, strategic thinking, and exploration, change with the emergence of AI?
● Exploration & education: How does AI change the ways in which we try to foster innovation and reshape educational systems?
● Production & processing: How does AI change the ways in which humans produce and process language, texts, and meaning?
● Meaning & machines: To what extent can human procedures of establishing meaning and sense be established in ‘intelligent’ algorithms?
● Cognition & corpora: How do the emergence of AI and new ways of corpus analysis change the ways in which we interpret the past, envision the future, and process material created by humans in general?
● Complexity & control: How can AI foster the ways in which we control societal and environmental complexity – and how might it at the same time bring about a loss of human control?
● Analysis & adaptation: How does AI enhance our capability of analysing complex systems and thereby make processes of decision-making more adaptive?
● Transparency & transformation: How does AI affect the transparency of societal and political processes, and how does it transform our engagement with complex ecologies?
A publication of the conference papers is planned.
Organiser: Faculty of Language and Literature, Humanities, Arts and Education, Prof. Dr Georg Mein, University of Luxembourg
- Armin Grunwald (Karlsruher Institut für Technologie)
- Jürgen Fohrmann (Rheinische Friedrich Wilhelms-Universität Bonn)
- Lyse Langlois (Université Laval, Quebec)
- Giuseppe Longo (École Normale Supérieure, Paris)
- Joanna Zylinska (Goldsmiths, University of London)
Contact: Dr Isabell Baumann, University of Luxembourg, Campus Belval, 11, Porte des Sciences, L- 4366 Esch/Alzette, Phone: +352 46 66 44 9331, firstname.lastname@example.org