
Documents 65F50: 3 results

The number of computing elements in modern supercomputers keeps growing, and with it the probability of hardware errors. Errors that do not stop the computation are called soft errors or silent errors. They can have a negative impact on the output of the code, so it is of interest to detect these silent errors and to correct them.
In this talk we are concerned with the detection and correction of silent errors in the conjugate gradient (CG) algorithm for solving linear systems $Ax = b$ with a symmetric positive definite matrix $A$. Silent errors in CG may affect or even prevent the convergence of the algorithm. We propose a new way to detect silent errors using a scalar relation that must be satisfied by the CG variables,
$\alpha_{k-1}^{2}\,\tfrac{\left(A p_{k-1}, A p_{k-1}\right)}{\left(r_{k-1}, r_{k-1}\right)} = 1+\beta_{k}, \qquad (1)$
where the $r_j$'s are the residual vectors, the $p_j$'s the descent directions, and
$\alpha_{k-1}=\tfrac{\left(r_{k-1}, r_{k-1}\right)}{\left(p_{k-1}, A p_{k-1}\right)}, \qquad \beta_{k}=\tfrac{\left(r_{k}, r_{k}\right)}{\left(r_{k-1}, r_{k-1}\right)}$
are the coefficients computed in CG.
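In exact arithmetic, relation (1) follows from the standard CG recurrences $r_{k}=r_{k-1}-\alpha_{k-1}Ap_{k-1}$ and $p_{k-1}=r_{k-1}+\beta_{k-1}p_{k-2}$: expanding $\left(r_{k}, r_{k}\right)$ gives
$\left(r_{k}, r_{k}\right) = \left(r_{k-1}, r_{k-1}\right) - 2\alpha_{k-1}\left(r_{k-1}, A p_{k-1}\right) + \alpha_{k-1}^{2}\left(A p_{k-1}, A p_{k-1}\right),$
and since $\left(r_{k-1}, A p_{k-1}\right) = \left(p_{k-1}, A p_{k-1}\right)$ by $A$-conjugacy of the descent directions, the definition of $\alpha_{k-1}$ turns the middle term into $-2\left(r_{k-1}, r_{k-1}\right)$; dividing by $\left(r_{k-1}, r_{k-1}\right)$ yields $\beta_{k} = \alpha_{k-1}^{2}\left(A p_{k-1}, A p_{k-1}\right)/\left(r_{k-1}, r_{k-1}\right) - 1$, which is (1).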
We study how relation (1) is modified in finite precision arithmetic and define a criterion to detect when this relation is not satisfied.
Checking relation (1) involves computing an additional dot product but, as was shown some time ago in [1] and more recently in [2], this relation can also be used to introduce more parallelism in the algorithm.
Assuming that the input data $(A, b)$ is not corrupted, we model silent errors by bit flips in the output of some CG steps. When an error is detected at some iteration $k$, we can restore the CG data from iteration $k-2$ and continue the computation safely.
Numerical experiments will show the efficiency of this approach.
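As an illustration of how such a check can be wired into CG, here is a minimal sketch (not the implementation discussed in the talk): plain CG with relation (1) evaluated at every iteration, a single injected bit flip to simulate a silent error, and a detection threshold `tau` chosen purely for illustration.

```python
# Minimal sketch: CG with the relation-(1) check and a simulated silent error.
# The test matrix, tolerance and detection threshold are illustrative choices;
# a full implementation would also roll back to the data of iteration k-2
# when an error is flagged.
import numpy as np

def cg_with_check(A, b, tol=1e-10, max_iter=500, tau=1e-8, flip_at=None):
    n = b.size
    x = np.zeros(n)
    r = b - A @ x
    p = r.copy()
    rr_old = r @ r                       # (r_{k-1}, r_{k-1})
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rr_old / (p @ Ap)        # alpha_{k-1}
        x = x + alpha * p
        r = r - alpha * Ap               # r_k
        if flip_at == k:                 # silent error: flip one bit of r[0]
            bits = r[0:1].view(np.uint64)
            bits ^= np.uint64(1 << 45)
        rr_new = r @ r                   # (r_k, r_k)
        beta = rr_new / rr_old           # beta_k
        # relation (1): alpha_{k-1}^2 (Ap,Ap)/(r_{k-1},r_{k-1}) = 1 + beta_k
        residual_of_1 = alpha**2 * (Ap @ Ap) / rr_old - (1.0 + beta)
        if abs(residual_of_1) > tau:
            print(f"iteration {k}: relation (1) violated by {residual_of_1:.2e}")
        if np.sqrt(rr_new) <= tol * np.linalg.norm(b):
            return x, k
        p = r + beta * p                 # p_k
        rr_old = rr_new
    return x, max_iter

# Small SPD test problem with a bit flip injected at iteration 10.
rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = M @ M.T + 200.0 * np.eye(200)
b = rng.standard_normal(200)
x, iters = cg_with_check(A, b, flip_at=10)
```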

65F10 ; 65F30 ; 65F50

Polynomial optimization methods often face major scalability issues in practice. Fortunately, for many real-world problems we can look them in the eye and exploit the inherent data structure arising from the input cost and constraints. The first part of my lecture will focus on the notion of 'correlative sparsity', occurring when there are few correlations between the variables of the input problem. The second part will present a complementary framework, where we show how to exploit a distinct notion of sparsity, called 'term sparsity', occurring when only a small number of terms is involved in the input problem by comparison with the fully dense case. Last but not least, I will present a very recently developed type of sparsity that we call 'ideal-sparsity', which exploits the presence of equality constraints. Several illustrations will be provided on important applications arising from various fields, including computer arithmetic, robustness of deep networks, quantum entanglement, optimal power-flow, and matrix factorization ranks.
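As a rough illustration of correlative sparsity (a minimal sketch, independent of any particular solver), one can build the correlative sparsity graph of a problem, in which two variables are adjacent if they appear together in a term of the objective or in the support of a constraint, and use its maximal cliques (in practice, those of a chordal extension) as the variable blocks of the sparse relaxation.

```python
# Minimal sketch: variable cliques for a correlative-sparsity relaxation.
# Problem data are given simply as sets of variable indices appearing together;
# a real implementation would read them off the polynomials and work with a
# chordal extension of the graph rather than its raw cliques.
import itertools
import networkx as nx

def correlative_cliques(objective_supports, constraint_supports, n_vars):
    """objective_supports / constraint_supports: iterables of sets of variable indices."""
    G = nx.Graph()
    G.add_nodes_from(range(n_vars))
    for support in itertools.chain(objective_supports, constraint_supports):
        for i, j in itertools.combinations(sorted(support), 2):
            G.add_edge(i, j)                         # i and j are "correlated"
    return [sorted(c) for c in nx.find_cliques(G)]   # maximal cliques = variable blocks

# Chained objective x0*x1 + x1*x2 + x2*x3 with one constraint involving x0 and x1:
print(correlative_cliques([{0, 1}, {1, 2}, {2, 3}], [{0, 1}], 4))
# e.g. [[0, 1], [1, 2], [2, 3]] -- three small blocks instead of one dense 4-variable block
```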

65F50 ; 90C22 ; 90C23

Polynomial optimization methods often face major scalability issues in practice. Fortunately, for many real-world problems we can look them in the eye and exploit the inherent data structure arising from the input cost and constraints. The first part of my lecture will focus on the notion of 'correlative sparsity', occurring when there are few correlations between the variables of the input problem. The second part will present a complementary framework, where we show how to exploit a distinct notion of sparsity, called 'term sparsity', occurring when only a small number of terms is involved in the input problem by comparison with the fully dense case. Last but not least, I will present a very recently developed type of sparsity that we call 'ideal-sparsity', which exploits the presence of equality constraints. Several illustrations will be provided on important applications arising from various fields, including computer arithmetic, robustness of deep networks, quantum entanglement, optimal power-flow, and matrix factorization ranks.

65F50 ; 90C22 ; 90C23
