This year, AI is a primary focus of science funding. LMU's faculties are well placed in the field: an onboard assistant for astronauts is already in orbit, and systems that detect credit-card fraud and help treat depression are under development. Professor Julian Nida-Rümelin is analyzing the philosophical issues raised by AI, and even theologians are grappling with its implications.
To activate his electronic live-in companion on board the International Space Station late last year, German astronaut Alexander Gerst simply said: “Wake up, Cimon!” Cimon’s response: “What can I do for you?” Cimon is the first autonomous robotic assistant equipped with artificial intelligence (AI) to fly in space. His human-like features were designed by medical professionals at LMU. “As a partner and fellow crew member, Cimon is intended to help ISS astronauts deal with the packed program of experiments, maintenance and repair work they are confronted with,” explains Dr. Judith-Irina Buchheim. According to her colleague Professor Alexander Choukèr, possible areas of application for such systems back on Earth include assisting engineers, researchers and physicians, tabulating patients’ symptoms, and providing support and companionship for senior citizens who live alone.
Prompted by the efforts of China and the US, the German Government will invest 3 billion euros in machine learning over the next six years. These funds will be used to create 100 new professorships and to finance specific projects. Novel procedures for the pseudonymization and anonymization of data are also planned, to enable researchers to access a wider range of data sources. Over the next four years, LMU will receive a total of 6 million euros for the Munich Center for Machine Learning, and a further 730,000 euros for the collaborative project “Machine Learning with Knowledge Graphs” (MLwin), from the Federal Ministry of Education and Research (BMBF). In the coming three years, a total of 9.3 million euros will be available for AI-related projects at LMU. The focus will be on how AI systems interact with the environment, rather than on the design and construction of robots. The Bavarian State Government has also pledged to invest approximately 300 million euros in an expert network devoted to machine learning.
Such investment can yield handsome dividends, as Christian Wachinger’s Junior Research Group on the Analysis of Medical Imaging Data at LMU’s Hospital for Child and Adolescent Psychiatry shows. The group has developed a procedure that uses neural networks to automatically ‘segment’ brain scans obtained by 3D MRI into anatomically coherent structures. That sounds complicated, but in practice it means that physicians no longer have to wait hours for the data to be processed: the new software can anatomically segment an MRI scan in less than 20 seconds. “This has far-reaching implications for the processing of data from large clinical studies,” Wachinger explains. The source code and the scans used to train the network are freely accessible, so other researchers can make use of the new method – which is also available as a web-based service. Wachinger is convinced that AI-based systems will in future allow doctors to spend more time listening to their patients.
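The underlying idea – assign every voxel of a scan to the tissue class it most resembles – can be illustrated with a deliberately simple sketch. The toy classifier below is not the LMU group’s actual neural network; the class names and intensity values are invented for the example:

```python
# Toy illustration of voxelwise segmentation: each voxel of a small
# synthetic "scan" is assigned the label of the tissue class whose
# learned mean intensity it is closest to.

def fit_class_means(voxels, labels):
    """Estimate a mean intensity per class from labeled training voxels."""
    sums, counts = {}, {}
    for v, c in zip(voxels, labels):
        sums[c] = sums.get(c, 0.0) + v
        counts[c] = counts.get(c, 0) + 1
    return {c: sums[c] / counts[c] for c in sums}

def segment(volume, class_means):
    """Label every voxel with the class of the nearest mean intensity."""
    return [[min(class_means, key=lambda c: abs(v - class_means[c]))
             for v in row] for row in volume]

# Hypothetical training voxels: intensities around 10 ("csf"),
# 100 ("gm", gray matter) and 200 ("wm", white matter).
train_vox = [8, 12, 95, 105, 190, 210]
train_lab = ["csf", "csf", "gm", "gm", "wm", "wm"]
means = fit_class_means(train_vox, train_lab)

scan = [[9, 98], [205, 11]]
print(segment(scan, means))  # [['csf', 'gm'], ['wm', 'csf']]
```

A real network replaces the single intensity feature with learned multi-scale features, but the output has the same shape: one anatomical label per voxel.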
Earlier diagnosis, thanks to AI
Professor Nikolaos Koutsouleris of the Hospital for Psychiatry and Psychotherapy uses pattern recognition algorithms and machine learning techniques to speed up and optimize diagnoses, prognoses and treatment options for patients suffering from serious psychiatric disorders. In this case, the trained learning algorithm makes use of only eight clinical variables – such as the range and gravity of symptoms and the severity of the patient’s overall condition – which can be evaluated within 5 minutes.
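How eight quickly scored clinical variables can be condensed into a single risk estimate can be sketched with a toy logistic model. The variable names, weights and scores below are invented for illustration and bear no relation to the actual trained algorithm:

```python
import math

# Illustrative sketch: a hand-weighted logistic model that turns eight
# clinical variables (each scored 0-10 here) into a risk estimate in [0, 1].
# All names and numbers are hypothetical.
FEATURES = ["symptom_range", "symptom_severity", "overall_condition",
            "social_functioning", "sleep", "anxiety", "cognition", "mood"]
WEIGHTS = [0.4, 0.6, 0.5, 0.3, 0.2, 0.3, 0.4, 0.5]
BIAS = -12.0  # shifts the decision threshold

def risk(patient):
    """Logistic risk score computed from the eight variables."""
    z = BIAS + sum(w * patient[f] for w, f in zip(WEIGHTS, FEATURES))
    return 1.0 / (1.0 + math.exp(-z))

mild = {f: 1 for f in FEATURES}    # low scores on every variable
severe = {f: 9 for f in FEATURES}  # high scores on every variable
print(round(risk(mild), 3), round(risk(severe), 3))
```

In the real system the weights are learned from patient data rather than set by hand; the point of the sketch is only that eight cheap-to-collect numbers suffice as input.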
This data library can in future be extended to other prognostic and diagnostic models. “We hope to assemble a broad portfolio of tools that will enable us to recognize and interpret warning signs earlier and develop better ways of preventing mental illness,” Koutsouleris says. To this end, he will work closely with Markus Bühner, holder of the Chair of Psychological Methodology and Assessment in the Department of Psychology.
Bühner’s interest has recently turned to the informational value of mobile-phone and driver-monitoring data – from which he can deduce the user’s sex, for example. “Such information can be used to adapt systems and content to the needs of the individual user,” he explains. He and his team are now working with statisticians and informatics specialists at LMU on the design of a mobile-sensing app for the evaluation of behavioral patterns. Mobile-sensing data can be used to predict an individual’s level of professional success on the basis of behavioral traits. Conversely, such data can contribute to the early diagnosis of mental illness. “In this way, we can help people who are unaware that they need medical or psychological treatment,” Bühner says.
Dr. Alexander Fraser of the Faculty of Languages and Literatures is a specialist in automatic translation. “Not so long ago, computational linguists would formulate sets of rules to specify how computers should translate texts. We now let the computer deduce the rules for itself, an approach known as unsupervised machine learning,” he says. He recently helped Britain’s National Health Service (NHS) to provide the information on its website in languages other than English. In many households in the UK, particularly in Scotland, Polish or Romanian is now the language of everyday life. In future, unsupervised learning promises to make translations from lesser-known languages – such as the Lower Sorbian spoken in Lower Lusatia – readily available.
A similar method is being used by the meteorologists Stephan Rasp and Professor George Craig in the Faculty of Physics, who seek to understand the dynamics of climate change. To model future climate change, enormous amounts of physical data must be encoded and analyzed – a task that challenges even the best supercomputers. To get around this problem, Rasp and Craig have trained a forecasting algorithm by feeding it with data from high-resolution simulations. The algorithm autonomously learns to recognize correlated patterns in the data. “In the end, the algorithm could reproduce the results obtained with conventional climate models very well, and was significantly more efficient,” Rasp says. Indeed, his mentor is convinced that the method has the potential to improve the precision of climate simulations. Perhaps the weather bulletins of the future will be real forecasts, not just educated guesses!
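The emulator principle – run the expensive model a few times, then fit a cheap surrogate to its output – can be sketched as follows, with a simple linear fit standing in for the group’s neural network and an invented one-line formula standing in for the high-resolution simulation:

```python
# Sketch of the emulator idea: run an "expensive" high-resolution model
# a few times, fit a cheap surrogate to its outputs, then use the
# surrogate for fast predictions.

def expensive_simulation(x):
    """Stand-in for a costly high-resolution run (here just 3x + 2)."""
    return 3.0 * x + 2.0

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# "Training data" from a handful of expensive runs ...
xs = [0.0, 1.0, 2.0, 3.0]
ys = [expensive_simulation(x) for x in xs]
a, b = fit_linear(xs, ys)

# ... after which the surrogate reproduces the model cheaply.
print(a * 10.0 + b)  # matches expensive_simulation(10.0) == 32.0
```

Real climate emulators learn a nonlinear mapping over thousands of variables, but the workflow – generate training data from the slow model, then substitute the fast learned approximation – is the same.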
Reducing the incidence of suicides
AI is also in vogue in the Institute for Communication Science and Media Research. Dr. Mario Haim is studying its impact on journalism and public communication. Among the questions at issue is whether particular search results returned in response to queries of a political or health-related nature are systematically favored by popular search engines. “So far, fears in connection with the influence of filter bubbles seem overblown,” he says. Another topic of interest is using AI to reduce the incidence of suicides. Many search engines now respond to suicide-related queries by showing contact details for helplines and advisory services – a feature that has proven useful. Haim’s work in this context is designed to ensure that this information is provided in the most effective form possible.
The range of potential applications of AI in the industrial sector is enormous, which has obvious repercussions for management education and research. For example, the AI-Based Information Systems Group is exploring how humans make decisions in conversations with robots. The Marketing and Strategy Cluster is looking at how machine-generated creativity is employed in marketing and how customers respond to it. Research projects in the Technology and Innovation Cluster ask whether AI systems can offer better solutions than humans. “The answer is ‘not yet,’” says Tobias Kretschmer, Professor of Business Administration. “But algorithmically generated recommendations are improving all the time.”
At the Faculty of Chemistry and Pharmacy’s Gene Center, researchers are evaluating how well various deep-learning strategies can recognize and classify the two-dimensional projections of molecular complexes imaged by cryo-electron microscopy – currently the primary bottleneck in the determination of their 3D structures. The Institute of Statistics initiated the first Elite Course in Data Science in Germany a full four years ago. Large datasets provide the essential input for AI. “But the data must be refined in appropriate ways,” says Professor Göran Kauermann, the Course Coordinator. AI has even infiltrated the law schools: ‘legal tech’ is in the offing, raising the question of whether the training of future lawyers should rest on databases assembled by algorithms, without professional evaluation.
Deciphering cuneiform texts with the aid of AI
AI is also making headway in many other areas of the Humanities. In the Faculty for the Study of Culture, Professor Enrique Jiménez, in collaboration with the University of Baghdad, is engaged in a project to collate and reconstruct literary texts from Ancient Babylon with the help of AI. Museums all over the world possess innumerable clay tablets bearing texts recorded by the ancient civilizations of the Near East. But individual tablets, taken in isolation, may tell us very little. Jiménez’s research group is now digitizing this rich text corpus, and plans to develop algorithms that can autonomously assemble fragmentary texts into comprehensible historical documents.
Not everything that goes by the name of AI actually deserves the label. “It took me a while to realize that what I was dealing with was AI,” recalls Professor Thomas Seidl of the Institute of Informatics. The term properly applies only to systems that are capable of inferring rules from input data and applying those rules to make independent decisions. By this definition, Seidl’s own specialty – data mining – is a fundamental building block of AI. He and his team seek to develop analytical strategies that enable computers to recognize patterns (or departures from otherwise pervasive patterns) in large datasets. One area in which such techniques are used is the monitoring of financial transactions: an unusually large purchase in South America paid for with a credit card issued by a German bank may set off alarm bells, for instance.
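The transaction-monitoring example can be sketched as a simple deviation test: flag any amount that lies far outside an account’s usual spending pattern. The data and threshold below are invented, and real fraud-detection systems draw on far richer features than the amount alone:

```python
import statistics

# Toy version of the pattern-deviation idea: a new card transaction is
# suspicious if its amount is many standard deviations away from the
# mean of the account's past transactions.

def is_suspicious(history, amount, threshold=3.0):
    """Flag the amount if its z-score against the history exceeds the threshold."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    return stdev > 0 and abs(amount - mean) / stdev > threshold

# Hypothetical spending history of a German cardholder (euros).
history = [23.5, 41.0, 18.9, 35.2, 27.8, 22.1, 30.4]

print(is_suspicious(history, 4999.0))  # True  - far outside the pattern
print(is_suspicious(history, 29.0))    # False - typical purchase
```

Note that the statistics are computed from the history alone, so a single extreme purchase cannot mask itself by inflating the baseline it is compared against.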
Seidl is also probing the social repercussions of the rise of AI. The Federal Government estimates that up to 1.6 million jobs could be lost to automation by 2025. “That is likely to lead to social disruption,” he says. However, AI will also create new job opportunities. The Ministry of Labor believes that AI could create up to 2.3 million new jobs, so that its overall impact on the job market is expected to be positive. “Machines need human arbitrators who can act as mediators and interpreters between the data, their electronic analysis and their real-life context,” says Seidl.
Professor Julian Nida-Rümelin in the Department of Philosophy is concerned with the philosophical dimensions of AI. “The development of, and interactions with, software-driven systems ranging from pattern recognition to robotics will not only have an influence on human lifestyles, but also on our self-image,” he affirms. This inevitably raises ethical questions in relation to software development and access to private user data. These issues arise in contexts such as the design of autonomous vehicles, the transformation of the political sphere by social media and the role of chatbots in commercial communication and in the shaping of political opinion.
Meanwhile, AI has even found its way into the realm of theology. The Institute of Ethics in the Faculty of Protestant Theology is participating in a project entitled “Shaping AI”, which was initiated by Professor Moritz Grosse-Wentrup of the Department of Statistics. The ethicists involved will attempt to shed light on the meaning and implementation of social responsibility in relation to the development and application of AI. “A development that appears desirable from the perspective of an individual may have wider repercussions that turn out to be problematic from the point of view of society as a whole,” as Professor Reiner Anselm points out. “We need to find ways of bringing these issues to the attention of a broader public, to subject them to public debate and to reach a consensus on their implications for ethical norms.” Many people are wary of the reach of AI. On the threshold of a new phase of information technology, it makes good sense not to lose sight of its wider social context. (David Lohmann)