Numerical software, common in scientific computing and embedded systems, inevitably uses an approximation of the real arithmetic in which most algorithms are designed. Finite-precision arithmetic, such as fixed-point or floating-point, is a common and efficient choice, but it introduces an uncertainty in the computed result that is often very hard to quantify. We need adequate tools to estimate the errors introduced, in order to choose suitable approximations that satisfy the accuracy requirements.
I will present a new programming model in which the scientist writes his or her numerical program in a real-valued specification language with explicit error annotations. It is then the task of our verifying compiler to select a suitable floating-point or fixed-point data type that guarantees the needed accuracy. I will show how a combination of SMT theorem proving, interval and affine arithmetic, and function derivatives yields accurate, sound and automated error estimation that can handle nonlinearity, discontinuities and certain classes of loops.
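To give a flavour of the range analysis underlying such round-off bounds, here is a minimal interval-arithmetic sketch. This is my own toy illustration, not the tool from the talk (which also uses SMT solving, affine arithmetic and derivatives); the expression and the crude bound are purely illustrative.

```python
def iadd(a, b):                       # interval addition [a] + [b]
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):                       # interval multiplication [a] * [b]
    ps = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(ps), max(ps))

def rnd(x, eps=2.0**-53):
    """Worst-case absolute rounding error of one binary64 operation
    whose exact result lies in the interval x."""
    return max(abs(x[0]), abs(x[1])) * eps

# f(x) = x*x + x for x in [0.9, 1.1]: real-valued range and a first-order
# round-off bound (one rounding per operation; the multiplication error
# propagates through the addition with coefficient 1).
x = (0.9, 1.1)
sq = imul(x, x)
f = iadd(sq, x)
err = rnd(sq) + rnd(f)
print(f, err)
```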
Additionally, finite-precision arithmetic is not associative, so different but mathematically equivalent orders of computation often result in errors of different magnitudes. We have used this fact not only to verify accuracy but also to actively improve it, by combining genetic programming with our error computation, with encouraging results.
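A concrete toy illustration of this non-associativity and of the effect of re-ordering a computation (my own example, not one from the talk):

```python
import math
import random

# Floating-point addition is not associative: two mathematically
# equivalent expressions, two different binary64 results.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)   # 1.0
print(a + (b + c))   # 0.0 -- b + c rounds back to -1e16

# Re-ordering a summation changes the accumulated error;
# math.fsum serves as the correctly rounded reference.
random.seed(0)
xs = [1e12] + [random.uniform(0.0, 1.0) for _ in range(100_000)]
ref = math.fsum(xs)
print(abs(sum(xs) - ref))           # naive left-to-right order
print(abs(sum(sorted(xs)) - ref))   # small values first: smaller error
```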
68Q60 ; 65G50 ; 68N30 ; 68T20
Geometry of Interaction, combined with the translation of the lambda-calculus into MELL proof nets, has enabled an unconventional approach to program semantics. Danos and Regnier, and Mackie, pioneered this approach and introduced the so-called token-passing machines.
It turned out that these unconventional token-passing machines can be turned, in a simple way, into a graphical realisation of conventional reduction semantics. The resulting semantics can be more convenient than the standard (syntactic) reduction semantics when analysing the local behaviour of programs. I will explain, in particular, how the resulting graphical reduction semantics can be used to reason about observational equivalence between programs.
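For a concrete point of comparison, here is a tiny Python sketch (my own illustration, not the graphical machinery of the talk) of the conventional syntactic reduction semantics being reinterpreted: a normal-order beta-reducer over de Bruijn terms, with equality of normal forms as a very weak stand-in for the observational equivalences discussed.

```python
# Terms: ("var", i) | ("lam", body) | ("app", fun, arg), de Bruijn indices.

def shift(t, d, cutoff=0):
    """Shift free variables of t (indices >= cutoff) by d."""
    if t[0] == "var":
        return ("var", t[1] + d) if t[1] >= cutoff else t
    if t[0] == "lam":
        return ("lam", shift(t[1], d, cutoff + 1))
    return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    """Substitute s for variable j in t."""
    if t[0] == "var":
        return s if t[1] == j else t
    if t[0] == "lam":
        return ("lam", subst(t[1], j + 1, shift(s, 1)))
    return ("app", subst(t[1], j, s), subst(t[2], j, s))

def step(t):
    """One leftmost-outermost beta step, or None if t is in normal form."""
    if t[0] == "app" and t[1][0] == "lam":
        return shift(subst(t[1][1], 0, shift(t[2], 1)), -1)
    if t[0] == "app":
        r = step(t[1])
        if r is not None:
            return ("app", r, t[2])
        r = step(t[2])
        return ("app", t[1], r) if r is not None else None
    if t[0] == "lam":
        r = step(t[1])
        return ("lam", r) if r is not None else None
    return None

def normal_form(t):
    while (r := step(t)) is not None:
        t = r
    return t

# (\x. x) (\y. y) and \y. y share a normal form: a (very weak) syntactic
# witness of the kind of program equivalence the talk reasons about.
I = ("lam", ("var", 0))
print(normal_form(("app", I, I)) == normal_form(I))  # True
```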
68-01 ; 68N18 ; 68N30
Software faults have plagued computing systems since the early days, leading to the development of methods based on mathematical logic, such as proof assistants or model checking, to ensure their correctness. The rise of AI calls for automated decision making that incorporates strategic reasoning and coordination of the behaviour of multiple autonomous agents acting concurrently and in the presence of uncertainty. Traditionally, game-theoretic solutions such as Nash equilibria are employed to analyse strategic interactions between multiple independent entities, but model-checking tools for scenarios exhibiting concurrency, stochasticity and equilibria have been lacking.
This lecture will focus on a recent extension of the probabilistic model checker PRISM-games (www.prismmodelchecker.org/games/), which supports quantitative reasoning and strategy synthesis for concurrent multiplayer stochastic games against temporal logic specifications that can express coalitional, zero-sum and equilibria-based properties. Game-theoretic models arise naturally in the context of autonomous computing infrastructure, including user-centric networks, robotics and security. Using illustrative examples, this lecture will give an overview of recent progress in probabilistic model checking for stochastic games, including Nash and correlated equilibria, and highlight challenges and opportunities for the future.
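As a self-contained illustration of the solution concept mentioned above (my own toy example in Python, not PRISM-games or its property language), the snippet below enumerates the pure-strategy Nash equilibria of a two-player coordination game with made-up payoffs:

```python
def pure_nash(A, B):
    """Pure-strategy Nash equilibria of a bimatrix game.

    A[i][j] and B[i][j] are the payoffs of players 1 and 2 when they
    play actions i and j, respectively.
    """
    eqs = []
    for i in range(len(A)):
        for j in range(len(A[0])):
            best_i = all(A[i][j] >= A[k][j] for k in range(len(A)))
            best_j = all(B[i][j] >= B[i][l] for l in range(len(A[0])))
            if best_i and best_j:          # neither player can improve alone
                eqs.append((i, j))
    return eqs

# A simple coordination game: both players prefer to match actions.
A = [[2, 0],
     [0, 1]]
B = [[2, 0],
     [0, 1]]
print(pure_nash(A, B))   # [(0, 0), (1, 1)]
```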
68N30 ; 68Q60 ; 91A15
The road from a (partial) differential equation to an actual program is a long one. This talk will present the formal verification of all the steps of this journey. This includes the mathematical error due to the numerical scheme (the method error), which is usually bounded by pen-and-paper proofs. It also includes the round-off errors due to floating-point computations.
The running example will be a C program that implements a numerical scheme for the resolution of the one-dimensional acoustic wave equation. This program is annotated to specify both method error and round-off error, and formally verified using interactive and automatic provers. Some work in progress about the finite element method will also be presented.
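For orientation, a textbook leapfrog discretisation of the one-dimensional acoustic wave equation u_tt = c^2 u_xx might look as follows; this Python sketch with made-up parameters only illustrates the kind of scheme involved and is not the annotated, verified C program from the talk.

```python
import numpy as np

# Second-order leapfrog scheme for u_tt = c^2 u_xx on [0, 1]
# with homogeneous Dirichlet boundary conditions.
c, nx, nt = 1.0, 101, 200
dx = 1.0 / (nx - 1)
dt = 0.5 * dx / c                     # CFL number 0.5 < 1: stable
r2 = (c * dt / dx) ** 2

x = np.linspace(0.0, 1.0, nx)
u_prev = np.sin(np.pi * x)            # u(x, 0)
u_curr = u_prev.copy()                # zero initial velocity (first-order start)

for _ in range(nt):
    u_next = np.empty_like(u_curr)
    u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                    + r2 * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
    u_next[0] = u_next[-1] = 0.0      # Dirichlet boundaries
    u_prev, u_curr = u_curr, u_next

print(u_curr.max())
```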
68N30 ; 68Q60 ; 68N15 ; 65Y04 ; 65G50
This talk is about verified numerical algorithms in Isabelle/HOL, with a focus on guaranteed enclosures for solutions of ODEs. The enclosures are represented by zonotopes, arising from the use of affine arithmetic. Enclosures for solutions of ODEs are computed by set-based variants of the well-known Runge-Kutta methods.
All of the algorithms are formally verified with respect to a formalization of ODEs in Isabelle/HOL: the correctness proofs are carried out for abstract algorithms, which are specified in terms of real numbers and sets. These abstract algorithms are automatically refined towards executable specifications based on lists, zonotopes, and software floating-point numbers. Optimizations for low-dimensional, nonlinear dynamics allow for an application highlight: the computation of an accurate enclosure for the Lorenz attractor. This contributes to an important proof that originally relied on non-verified numerical computations.
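To give a feel for the affine forms behind the zonotope representation, here is a toy Python sketch under my own simplifications; in particular it ignores the outward rounding and the Runge-Kutta remainder handling that sound, verified algorithms must account for.

```python
import itertools

_fresh = itertools.count()            # supply of noise-symbol indices

class Affine:
    """Toy affine form x0 + sum_i xi * eps_i with eps_i in [-1, 1]."""

    def __init__(self, center, terms=None):
        self.x0 = float(center)
        self.terms = dict(terms or {})        # noise symbol -> coefficient

    @staticmethod
    def from_interval(lo, hi):
        return Affine((lo + hi) / 2, {next(_fresh): (hi - lo) / 2})

    def radius(self):
        return sum(abs(c) for c in self.terms.values())

    def interval(self):
        return (self.x0 - self.radius(), self.x0 + self.radius())

    def __add__(self, other):
        t = dict(self.terms)
        for k, c in other.terms.items():
            t[k] = t.get(k, 0.0) + c
        return Affine(self.x0 + other.x0, t)

    def __sub__(self, other):
        t = dict(self.terms)
        for k, c in other.terms.items():
            t[k] = t.get(k, 0.0) - c
        return Affine(self.x0 - other.x0, t)

    def __mul__(self, other):
        t = {k: other.x0 * c for k, c in self.terms.items()}
        for k, c in other.terms.items():
            t[k] = t.get(k, 0.0) + self.x0 * c
        # nonlinear part, over-approximated by a fresh noise symbol
        t[next(_fresh)] = self.radius() * other.radius()
        return Affine(self.x0 * other.x0, t)

x = Affine.from_interval(0.9, 1.1)
print((x - x).interval())      # (0.0, 0.0): shared noise symbols cancel
print((x * x + x).interval())  # an enclosure of x^2 + x over [0.9, 1.1]
```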
68T15 ; 34-04 ; 34A12 ; 37D45 ; 65G20 ; 65G30 ; 65G50 ; 65L70 ; 68N15 ; 68Q60 ; 68N30 ; 65Y04