Objectives
As computers and communications technology advance, greater opportunities arise for intelligent mathematical computation. While computer algebra, automated deduction, mathematical publishing and novel user interfaces individually have long and successful histories, we are now seeing increasing opportunities for synergy among these areas.
The conference was organized by Serge Autexier (DFKI) and Michael Kohlhase (JUB), took place at Jacobs University in Bremen, and consisted of five tracks.
The overall programme was organized by the General Program Chair Johan Jeuring.
The proceedings are published by Springer.
Work-in-progress papers have been published by CEUR-WS.org together with the papers of the OpenMath and MathUI workshops.
Invited talks were given by:
- Yannis Haralambous, Département Informatique, Télécom Bretagne:
Text Mining Methods Applied to Mathematical Texts
Up to now, flexiform mathematical text has mainly been processed with the intention of formalizing
mathematical knowledge so that proof engines can be applied to it. This approach can be compared with the symbolic
approach to natural language processing, where methods of logic and knowledge representation are used to
analyze linguistic phenomena. In the last two decades, a new approach to natural language processing has emerged,
based on statistical methods and, in particular, data mining. This method, called text mining, aims to process large
text corpora, in order to detect tendencies, to extract information, to classify documents, etc. In this talk I will present
math mining, namely the potential applications of text mining to mathematical texts. After reviewing some existing work
heading in that direction, I will formulate and describe several roadmap suggestions for the application of statistical
methods to mathematical text processing: (1) using terms instead of words as the basic unit of text processing,
(2) using topics instead of subjects ("topics" in the sense of "topic models" in natural language processing, and "subjects"
in the sense of various mathematical subject classifications), (3) using and correlating various graphs extracted from
mathematical corpora, and (4) using paraphrastic redundancy. The purpose of this talk is to give a glimpse of potential
applications of the math mining approach to large mathematical corpora, such as arXiv.org.
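Suggestion (1) above, using terms rather than words as the basic unit, can be illustrated with a minimal sketch. The corpus and term list below are invented for illustration, and the greedy string replacement stands in for a real mathematical terminology extractor:

```python
from collections import Counter

# Hypothetical toy corpus of mathematical sentences (illustrative only).
corpus = [
    "every compact hausdorff space is normal",
    "a compact hausdorff space admits a unique uniformity",
    "normal spaces satisfy the tietze extension theorem",
]

# Toy term list standing in for an actual terminology extractor.
terms = {"compact hausdorff space", "tietze extension theorem"}

def tokenize_terms(sentence, terms):
    """Collapse known multiword terms into single tokens (longest first),
    then split the remainder into plain words."""
    for t in sorted(terms, key=len, reverse=True):
        sentence = sentence.replace(t, t.replace(" ", "_"))
    return sentence.split()

tokens = [tok for s in corpus for tok in tokenize_terms(s, terms)]
print(Counter(tokens).most_common(3))
```

Counting over term tokens instead of bare words keeps "compact hausdorff space" as one unit of meaning, which is the point of the proposed shift in granularity.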
- Conor McBride, Department of Computer and Information Sciences, University of Strathclyde:
A Prospection for Reflection
Gödel’s Incompleteness theorems tell us that there are effective limitations on
the capacity of logical systems to admit reasoning about themselves. However,
there are solid pragmatic reasons for wanting to try: we can benefit
considerably by demonstrating that systematic patterns of reasoning (and
programming, of course) are admissible. It is very useful to treat goals as data
in order to attack them with computation, adding certified automation to
interactive proof tools, delivering the efficiency required to solve
compute-intensive problems with no loss of trust.
Dependent type theory provides a ready means of reflection: goals become types,
and types may be computed from data. This technique has proven highly successful
when tightly targeted on specific problem domains. But we may yet ask the bold
question of how large a universe of problems we can effectively reflect: how
much of our type theory can we encode within its own notion of data? To what
extent can our type theory capture its own typing discipline? Might we construct
a hierarchy of type theories where the whole of each lower level can be
reflected at higher levels? In this talk, I shall outline grounds for modest
optimism and propose a plan of campaign. The obstacles may turn out to be as
fascinating as the objective. The reward, if we can reach it, is a flexible
technology for certified automation in problem-solving, honestly articulating
what, at least, computers can do.
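The idea that "goals become types, and types may be computed from data" is the classic universe construction of dependent type theory. A minimal sketch in Lean 4 (the code names are my own, not from the talk):

```lean
-- A tiny universe of codes: types represented as data.
inductive Code where
  | nat  : Code
  | bool : Code
  | arr  : Code → Code → Code

-- Decode each code to the Lean type it denotes:
-- a type computed from data.
def El : Code → Type
  | .nat     => Nat
  | .bool    => Bool
  | .arr a b => El a → El b

-- A goal stated as data, then inhabited after decoding.
example : El (.arr .nat .nat) := fun n => n + 1
```

McBride's question is how far this pattern scales: how much of the type theory itself, including its typing discipline, can be coded up as such data.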
- Cezar Ionescu, Potsdam Institute for Climate Impact Research:
Increasingly Correct Scientific Programming
Dependently typed languages promise an elegant environment for
programming from specifications: the properties that a program should
satisfy are expressed as logical formulas and encoded via the
Curry-Howard isomorphism as a type, a candidate implementation should be
a member of this type, and the type checker verifies whether this is
indeed the case. But sometimes, the type checker demands "too much":
in particular, in scientific programming, it often seems that one must
formalize all of real analysis before writing a single line of useful
code. Alternatively, one can use mechanisms provided by the language in
order to circumvent the type checker, and confirm that "real
programmers can write Fortran in any language". We present an example
of navigating between these extremes in the case of economic modeling.
At first, we use postulates in order to be able to reuse code, and
achieve a kind of conditional correctness (for example, we can find a
Walrasian equilibrium if we are able to solve a convex
optimization problem). We then remove more and more of the postulates,
replacing them with proofs of correctness, by using interval arithmetic
methods.
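The "conditional correctness via postulates" pattern can be sketched in Lean 4; the names below are hypothetical placeholders, not the talk's actual development:

```lean
-- Postulates standing in for a not-yet-verified solver
-- (hypothetical names, for illustration only).
axiom ConvexProblem : Type
axiom Solution : ConvexProblem → Type
axiom solve : (p : ConvexProblem) → Solution p

-- Everything built on `solve` is correct *conditionally* on the
-- postulate: replacing the axiom with a verified implementation
-- (e.g. via interval arithmetic) later discharges the condition,
-- making the development "increasingly correct".
def equilibriumOf (p : ConvexProblem) : Solution p := solve p
```

This is how one can reuse code and make progress before all of real analysis is formalized: the postulates record exactly what remains to be proved.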
Workshops & Doctoral Programme
In addition to the five main tracks, CICM 2012 hosted a number of workshops and a
doctoral programme.