  • CICM 2023 Invited Speakers

      • Frédéric Blanqui (INRIA): Progresses on proof systems interoperability
        Abstract: Proof system interoperability is a research topic that started about 30 years ago and received some attention with the formalization of Hales' proof of the Kepler Conjecture in the 2000s, as parts of this proof were initially formalized in two different provers. Proof system interoperability then received new interest in the 2010s with the increasing use of automated theorem provers in proof assistants.
        At about the same time, a new logical framework called Dedukti appeared, which extends LF by allowing the identification of types modulo an equational theory. Various logical systems can be nicely encoded in Dedukti, and a number of tools have been developed to represent proofs from various provers in Dedukti and to translate them to other provers.
        In this talk, I will review some of these works and tools, and present recent efforts to handle entire libraries of proofs.
      • Mateja Jamnik (University of Cambridge): How can we make trustworthy AI?
        Abstract: Not too long ago most headlines talked about our fear of AI. Today, AI is ubiquitous, and the conversation has moved on from whether we should use AI to how we can trust the AI systems that we use in our daily lives. In this talk I look at some key technical ingredients that help us build confidence and trust in using intelligent technology. I argue that intuitiveness, adaptability, explainability and inclusion of human domain knowledge are essential in building this trust. I present some of the techniques and methods we are building for making AI systems that think and interact with humans in more intuitive and personalised ways, enabling humans to better understand the solutions produced by machines, and enabling machines to incorporate human domain knowledge in their reasoning and learning processes.
      • Lawrence C. Paulson (University of Cambridge): Large-Scale Formal Proof for the Working Mathematician - Lessons learnt from the Alexandria Project
        Abstract: ALEXANDRIA is an ERC-funded project that started in 2017, with the aim of bringing formal verification to mathematics. The past six years have seen great strides in the formalisation of mathematics and also in some relevant technologies, above all machine learning. Six years of intensive formalisation activity seem to show that even the most advanced results, drawing on multiple fields of mathematics, can be formalised using the tools available today.
      • Martina Seidl (Johannes Kepler University Linz): Never trust your solver: Certificates for SAT and QBF
        Abstract: Many problems in formal verification and artificial intelligence rely on advanced reasoning technologies in the background, often in the form of SAT or QBF solvers. Such solvers are sophisticated and highly tuned pieces of software, often too complex to be verified themselves. The question then arises how one can be sure that the result of such a solver is correct, especially when that result is critical for proving the correctness of another system. If a SAT solver, a tool for deciding a propositional formula, returns satisfiable, then it also returns a model, which is easy to check. If the answer is unsatisfiable, the situation is more complicated. The same holds for true and false quantified Boolean formulas (QBFs), which extend propositional logic by quantifiers over the Boolean variables. To increase the trust in a solving result, modern solvers are expected to produce certificates that can be checked independently and efficiently. In this talk, we give an overview of the state of the art in validating the results of SAT and QBF solvers based on certification.
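        The asymmetry the abstract points to, namely that a satisfiability claim comes with a model that is cheap to verify, can be illustrated with a minimal sketch. The encoding below (DIMACS-style signed integers for literals, a set of literals for the assignment) is an illustrative assumption, not tied to any particular solver's output format:

        ```python
        def check_model(clauses, model):
            """Check that a claimed model satisfies a CNF formula.

            clauses: list of clauses, each a list of non-zero ints
                     (DIMACS style: 3 means x3, -3 means "not x3").
            model:   total assignment as a set of signed ints,
                     one literal per variable.
            """
            assignment = set(model)
            # The formula is satisfied iff every clause contains
            # at least one literal that the assignment makes true.
            return all(any(lit in assignment for lit in clause)
                       for clause in clauses)

        # (x1 or not x2) and (x2 or x3)
        cnf = [[1, -2], [2, 3]]
        print(check_model(cnf, {1, -2, 3}))    # True: both clauses satisfied
        print(check_model(cnf, {-1, -2, -3}))  # False: the second clause is falsified
        ```

        Checking a model is a single linear pass over the clauses; certifying an unsatisfiability answer, by contrast, requires the solver to emit a proof (e.g. a clausal resolution-style derivation) that a separate checker replays, which is what the certification formats surveyed in the talk provide.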

