Invited speakers

Frank van Harmelen, honorary president

Current position: Professor, head of the Knowledge Representation & Reasoning research group in the Computer Science Department of the VU University Amsterdam, head of The Network Institute

Web page: https://www.cs.vu.nl/~frank.van.harmelen/

Presentation support: Franck-Van-Harmelen-EGC-2019

Title of the presentation: Combining learning and reasoning: new challenges for knowledge graphs

Abstract:

The question of how to combine learning with reasoning is widely seen as one of the major challenges for AI. Knowledge graphs are now well established as a formalism for knowledge representation and reasoning, with large-scale adoption in industry (Google search, Apple's Siri, Amazon, Uber, Airbnb, BBC, Reuters, and many others).
Besides their use for reasoning tasks, knowledge graphs have also shown promise as a formalism for combining reasoning with learning: they have been used as a source of labels for semi-supervised learning, machine learning has been used to generate knowledge graphs, and knowledge graphs have been used to construct post-hoc explanations for machine learning, to name just a few examples. The central questions in this talk will be: what progress has been made to date on combining knowledge graphs with machine learning, and what are the promises and challenges in both the near and the long term?
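To make the formalism concrete, here is a minimal sketch (not taken from the talk; the entities and predicates are made up for illustration) of a knowledge graph as a set of subject-predicate-object triples, with a tiny reasoning step that propagates class membership along subclass edges:

```python
# A toy knowledge graph: (subject, predicate, object) triples.
# Entity and predicate names are hypothetical, for illustration only.
triples = {
    ("Siri", "instance_of", "VoiceAssistant"),
    ("VoiceAssistant", "subclass_of", "SoftwareAgent"),
    ("SoftwareAgent", "subclass_of", "Agent"),
}

def infer_types(entity, kg):
    """Collect all classes of `entity`, following subclass_of transitively."""
    types = {o for s, p, o in kg if s == entity and p == "instance_of"}
    frontier = set(types)
    while frontier:
        parents = {o for s, p, o in kg
                   if s in frontier and p == "subclass_of"} - types
        types |= parents
        frontier = parents
    return types

print(sorted(infer_types("Siri", triples)))
# ['Agent', 'SoftwareAgent', 'VoiceAssistant']
```

Such inferred facts are one example of how the symbolic side can feed a learning pipeline, e.g. as extra labels for semi-supervised training.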

Ioana Manolescu

Current position: Senior Inria researcher (DR1), head of the Inria/LIX CEDAR project team, part-time professor at École Polytechnique

Web page: http://pages.saclay.inria.fr/ioana.manolescu/

Presentation support: Ioana-Manolescu-EGC-2019

Title of the presentation: Computational fact-checking: state of the art, challenges, and perspectives

Abstract:

The tremendous value of Big Data has lately also been noticed by the media, and the term "data journalism" has been coined to refer to journalistic work inspired by digital data sources. A particularly popular and active area of data journalism is fact-checking. The term was born in the journalist community and referred to the process of verifying and ensuring the accuracy of published media content; more recently, its meaning has shifted to the analysis of politics, economy, science, and news content shared in any form, but first and foremost on the Web. A very lively area of digital content management research has taken up these problems and works to propose foundations (models) and algorithms, and to implement them in concrete tools. In my talk, I will show why I believe the data and knowledge management communities should get involved, cast computational fact-checking as a content management problem, present some of the research results attained in this area, and point out areas where more work is needed. This talk is mostly based on research carried out within the ANR ContentCheck project (http://contentcheck.inria.fr).

Krishna P. Gummadi

Current position: Professor, head of the research group Networked Systems, Max Planck Institute for Software Systems (MPI-SWS), Germany

Web page: https://people.mpi-sws.org/~gummadi

Presentation support: Krishna-Gummadi-EGC-2019

Title of the presentation: Foundations for Fair Algorithmic Decision Making

Summary:

Algorithmic (data-driven, learning-based) decision making is increasingly being used to assist or replace human decision making in a variety of domains, ranging from banking (rating user credit) and recruiting (ranking applicants) to the judiciary (profiling criminals) and journalism (recommending news stories). Recently, concerns have been raised about the potential for discrimination and unfairness in such algorithmic decisions. Against this background, in this talk I will discuss the following foundational questions about algorithmic unfairness:

  1. How do algorithms learn to make unfair decisions?
  2. How can we quantify (measure) unfairness in algorithmic decision making?
  3. How can we control (mitigate) algorithmic unfairness? i.e., how can we re-design learning mechanisms to avoid unfair decision making?
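As a concrete illustration of question 2, one widely used (though by no means the only) way to quantify unfairness is the demographic-parity gap: the difference in favourable-decision rates between two groups. This sketch is illustrative and not necessarily the measure discussed in the talk; the data is made up:

```python
# Hypothetical decisions (1 = favourable) and the group of each individual.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

def positive_rate(decisions, groups, group):
    """Fraction of favourable decisions received by one group."""
    picks = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picks) / len(picks)

# Demographic-parity gap: 0 means both groups are treated alike.
gap = positive_rate(decisions, groups, "a") - positive_rate(decisions, groups, "b")
print(gap)  # 0.75 - 0.25 = 0.5
```

Question 3 then amounts to constraining the learner so that measures like this gap stay below an acceptable threshold while preserving accuracy.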

Roberto Di Cosmo

Current position: Director of Software Heritage at Inria and Professor of Computer Science at University Paris Diderot (IRIF)

Web page: http://www.dicosmo.org

Presentation support: Roberto-Di-Cosmo-EGC-2019

Bio:

After obtaining a PhD in Computer Science at the University of Pisa, Roberto Di Cosmo was an associate professor for almost a decade at École Normale Supérieure in Paris, and became a full professor of Computer Science at University Paris Diderot in 1999. He is currently on leave at Inria. He has been actively involved in research in theoretical computer science, specifically in functional programming, parallel and distributed programming, the semantics of programming languages, type systems, rewriting, and linear logic. His main focus is now on the new scientific problems posed by the general adoption of Free Software, with a particular emphasis on the static analysis of large software collections, which was at the core of the European research project Mancoosi. Following with great interest the evolution of our society under the impact of IT, he is a long-term Free Software advocate, contributing to its adoption since 1998 with the best-seller Hijacking the World, as well as seminars, articles, and software. He created the Free Software thematic group of the Systematic cluster in October 2007, and since 2010 he has been director of IRILL, a research structure dedicated to Free and Open Source Software quality. In 2016, he created Software Heritage, an initiative to build the universal archive of all publicly available source code, which he directs.

Title of the presentation: Software Heritage: can we handle all our source code?

Abstract:
Software Heritage is a non-profit initiative whose ambitious goal is to collect, preserve, and share the source code of all software ever written, together with its full development history, building a universal source code knowledge base. Software Heritage addresses a variety of needs: preserving our scientific and technological knowledge, enabling better software development and reuse for society and industry, fostering better science, and building an essential infrastructure for large-scale, reproducible software studies. We have already collected over 4 billion unique source files from over 80 million repositories. Handling this gigantic data set is a humbling undertaking, and requires novel approaches to store and query it in a way that can cope with the growth of collaborative software development. In this talk, we will highlight the new challenges and opportunities that Software Heritage brings up.
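Storing billions of unique files at this scale relies on content-addressed deduplication: each file is identified by a cryptographic hash of its bytes, so identical contents are stored once regardless of how many repositories contain them. As a sketch (the scheme below is git's blob hashing, which Software Heritage's intrinsic identifiers build on; it is shown here as an illustration, not as the archive's full identifier scheme):

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    """Git-style intrinsic identifier for file content: SHA-1 over a
    'blob <length>\\0' header followed by the bytes themselves.
    Identical contents always map to the same id, enabling deduplication."""
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

print(git_blob_id(b"hello world\n"))
# 3b18e512dba79e4c8300dd08aeb37f8e728b8dad
```

Because the identifier is derived from the content alone, two crawls of different repositories that find the same file agree on its id without any coordination.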
