news

Dec 12, 2024 We’re launching www.dailogues.ai 🚀, our discursive space around AI!
Nov 20, 2024 Back at Rewe Digital, introducing the fundamentals of language models to a group of managers! 🤖 
Sep 19, 2024 My latest talk on Trust in AI is now online: https://www.youtube.com/watch?v=PhNdlgwNlsU ✨🎥
Sep 12, 2024 👀 I’m attending the Conference on the Ethics of Conversational Agents & Generative AI, hosted by the Munich Center for Machine Learning.
Sep 09, 2024 My next talk at Deutsche Telekom’s AI Competence Center takes place today, September 9, at 13:00 via Microsoft Teams (please contact me for the link).
Title: Can We Trust AI?
Abstract: Trustworthiness in the AI scene is often grounded in the notion of a system that produces reliable and factually correct outputs. But is believing that an output is true the same as “trusting” the system that created it? Human trust is a complex philosophical and psychological notion. Trust may be a state of belief as well as a form of social bond. For example, trust between humans often implies a certain degree of mutual vulnerability. Taking this into account, is it even possible to trust machines?
Aug 30, 2024 I’m presenting a few machine learning fundamentals for understanding LLMs at Rewe Digital. 🤖
Jul 17, 2024 I spoke again at Deutsche Telekom’s AI Competence Center about agency and AI.
Title: Agentic AI: What Can Philosophy Teach Us About Agency?
Abstract: Agentic AI, such as AutoGen, Crew AI, and LangGraph, has recently garnered significant attention. Stanford’s Andrew Ng, renowned for his Coursera courses on neural networks, has also lauded the use of multi-agent design patterns with LLMs in The Batch. The concept is straightforward: multiple LLM-powered agents collaborate to solve problems. By incorporating local memories and internal reflection capabilities, the resulting system of agents can be both powerful and complex. However, as critical AI philosophers, we must ask ourselves: do these agents truly possess agency, that is, the ability to take action? To explore the potential of non-human agents, this talk will introduce fundamental definitions and conceptual tools for discussing agency in the context of LLM agents. It appears that agency is a continuous attribute rather than a categorical one.
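For readers unfamiliar with the pattern the abstract describes, here is a minimal, framework-agnostic sketch (not the AutoGen, Crew AI, or LangGraph APIs) of two agents with local memories and an internal reflection step collaborating on a task. The `llm` function is a placeholder for a real model call, not an actual API.

```python
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned reply here."""
    return f"[model reply to: {prompt[:40]}...]"

@dataclass
class Agent:
    name: str
    role: str  # system-style instruction framing the agent's behavior
    memory: list[str] = field(default_factory=list)  # local memory: only this agent sees it

    def act(self, task: str) -> str:
        context = "\n".join(self.memory[-5:])  # recall recent local history
        draft = llm(f"{self.role}\nContext:\n{context}\nTask: {task}")
        # Internal reflection: the agent critiques and revises its own draft.
        revised = llm(f"{self.role}\nReflect on and improve:\n{draft}")
        self.memory.append(revised)  # remember own output for later turns
        return revised

# Two agents collaborating: a writer proposes, a reviewer responds.
writer = Agent("writer", "You draft solutions.")
reviewer = Agent("reviewer", "You critique drafts and suggest fixes.")

task = "Outline an argument about machine agency."
for _ in range(2):  # a short back-and-forth collaboration loop
    task = reviewer.act(writer.act(task))
print(task)
```

Even in this toy form, the question from the talk surfaces: the agents “take action” only in the sense of transforming strings in a loop, which is why agency here looks like a matter of degree.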
Jun 06, 2024 In my 45-minute presentation at Deutsche Telekom’s AI Competence Center, I discussed LLMs and the possibility of lying.
Title: When Moral Rules Reach their Limits: Should LLMs Lie?
Abstract: One of the most famous ideas in philosophy is Immanuel Kant’s categorical imperative: “Act only according to that maxim whereby you can at the same time will that it should become a universal law.” This imperative gives us a concrete answer as to how to act morally: it allows us to derive ethical rules from non-ethical ones. At the same time, Kant is also known for proposing that not lying should be such a binding moral rule. However, we encounter many situations in which humans lie, and even some in which lying appears ethically licensed. If so, following a rule such as “do not lie” does not always seem to be what we want from an ethical point of view. And if we, as humans, allow degrees of variability in truth-telling, what does that mean when we try to “teach” an LLM to produce only true and factual content?
Apr 12, 2024 Happy to discuss responsibility and AI at Deutsche Telekom’s AI Competence Center.
Title: Why “Responsible AI” Is a Misnomer
Abstract: The meaning of responsibility is closely tied to the idea of “responding” for one’s deeds in a justified manner. For that reason, responsibility is also often interpreted as “answerability”. The same holds for the German “Verantwortung” and the correlated idiom “Rede und Antwort stehen” (to answer for one’s actions). However, (current) AI cannot be responsible, for several reasons. Every AI system to date lacks the capacity to reasonably justify itself. Another fundamental aspect of taking responsibility is accepting the consequences of one’s actions, such as punishment for unlawful behavior, yet a machine learning system cannot be punished in any meaningful way. In short: AI is not responsible; “Responsible AI” is a misnomer. But who is responsible? And what can responsibility with AI look like?