Elementary functions, digital filters and beyond:
Symbolic-numerical methods for code generation & transformation

Sorbonne Université / LIP6

12 and 13 March 2018

Location - How to get there

The workshop takes place on the Jussieu campus of Sorbonne Université (formerly Université Pierre et Marie Curie).
Within the campus, the workshop is held on the 24th floor of the Zamansky Tower (the tall tower in the middle of the campus, easy to spot from the entrance).

Registration

Don't forget to register here (it is free but mandatory, for logistical reasons).

Programme

This programme includes talks and demos.
Monday 12/03
Time Topic
8:30 Welcome + coffee
Current state of the Metalibm effort
9:00 Sollya and python-sollya (Ch. Lauter)

The development of certain applications using floating-point or fixed-point arithmetic requires a safe and rigorous development toolbench. Such applications include, for example, mathematical (libm) functions (such as exp, sin, cos) as well as digital filters for signal processing. The tool needs to support, among other features, rigorous function (expression) evaluation through faithful rounding, simulation of the IEEE754 roundings and operations, and polynomial approximation. In this talk, we present the Sollya 6.0 software tool, which has been designed for that purpose. As its key feature, Sollya provides multiple-precision faithful rounding of composite expressions through multiple-precision interval arithmetic with dynamic precision adaptation. This feature comes together with a fully developed scripting language, library and dynamic extension features, certain Computer Algebra System features, polynomial approximation algorithms, and many other features. We shall start the talk with a live demo of the Sollya software tool, giving an overview of its features. We shall then discuss in more detail some of the algorithms behind the faithfully-rounded and interval arithmetic evaluation machinery inside Sollya.
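
As a concrete illustration of the faithful-rounding principle, here is a minimal Python sketch using mpmath's interval arithmetic. It is not Sollya's actual algorithm (Sollya builds on MPFI and tracks errors at the expression level); the function names and the stopping criterion are ours. The idea is Ziv-style: evaluate with an interval enclosure, and widen the working precision until both endpoints round to the same binary64 number.

    from mpmath import iv

    def round_and_check(f, x, start_prec=64, max_prec=4096):
        """Return f(x) rounded to binary64 via interval refinement."""
        prec = start_prec
        while prec <= max_prec:
            iv.prec = prec                   # interval precision for this attempt
            y = f(iv.mpf(x))                 # rigorous enclosure of the true value
            lo, hi = float(y.a), float(y.b)  # round both endpoints to binary64
            if lo == hi:                     # both endpoints round to the same
                return lo                    # double, hence so does the true value
            prec *= 2                        # enclosure too wide: double and retry
        raise ArithmeticError("precision budget exhausted")

    print(round_and_check(iv.exp, "1.5"))    # exp(1.5), correctly rounded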

9:30 Meta-implementation of vectorized logarithm function in binary floating-point arithmetic (H. de Lassus Saint-Geniès)

Besides scalar instructions, modern micro-architectures also provide support for vector instructions. These make it possible to process packed inputs (typically 4 or 8) in a single instruction. The challenge is to write vector programs for mathematical functions like sin, cos, exp, log, … that efficiently exploit those vector instructions. This talk focuses on the design of a vectorized implementation of the log(x) function, and more particularly on its automation for different formats and micro-architectures. First, it rewrites a classical range reduction in a branchless fashion, so as to make the best use of recent micro-architectural features, like the rcp (reciprocal) instruction, and to treat all inputs in the same flow. Second, it details rigorously how to achieve “faithfully rounded” implementations. Third, it shows how to automate this implementation process using the MetaLibm framework, on micro-architectures supporting SSE/AVX and AVX2. Finally, we illustrate that this process achieves high-throughput implementations for the binary32 and binary64 formats in a fully automated way.
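
The core identity behind the rcp-based range reduction can be played with in a few lines. The sketch below is a scalar Python model, not the generated vector code; the ~12-bit reciprocal stand-in and all names are ours.

    import math

    def log_sketch(x):
        """Scalar model of the branchless, rcp-based reduction for log(x)."""
        m, e = math.frexp(x)             # x = m * 2^e with m in [0.5, 1)
        r = round(4096.0 / m) / 4096.0   # stand-in for a ~12-bit rcp estimate
        u = r * m - 1.0                  # small, since r is close to 1/m
        # log(x) = e*log(2) - log(r) + log1p(u); a real implementation reads
        # -log(r) from a small table indexed by the bits of r, and evaluates
        # log1p(u) with a short polynomial; here we call libm for clarity.
        return e * math.log(2.0) - math.log(r) + math.log1p(u)

    print(log_sketch(10.0), math.log(10.0))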

10:00 From filter to code: an overview (slides) (Th. Hilaire)

We will present a complete flow that transforms linear signal processing (or control) algorithms into fixed-point (or floating-point) implementations with a reliable bound on the output error. Starting from a mathematical specification, our flow considers the several equivalent algorithms that can implement a given filter, the reorganization of the computation order, bit-width optimization, etc. Our approach is based on basic bricks such as a reliable evaluation of the magnitude of each variable (in order to define its fixed-point format and guarantee that no overflow will occur), an error analysis to bound the output error (the difference between the exact algorithm and the implemented one), a bit-width optimization (the smallest bit-widths that still guarantee the error to be bounded by a given epsilon), and fixed-point code generation (C code with integers, or VHDL operators).
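
Two of these basic bricks can be sketched in a few lines of Python. This is a toy model under simplifying assumptions (our msb/lsb conventions, no careful handling of the power-of-two corner cases), not the speaker's tool.

    import math

    def fixed_point_format(mag_bound, wordlength):
        """(msb, lsb) of a signed fixed-point format holding |x| <= mag_bound."""
        msb = math.ceil(math.log2(mag_bound)) + 1   # +1 for the sign bit
        lsb = msb - wordlength + 1                  # wordlength = msb - lsb + 1
        return msb, lsb

    def quantize(x, lsb):
        """Round x to the nearest multiple of 2^lsb (integer mantissa)."""
        return round(x * 2.0 ** -lsb)

    msb, lsb = fixed_point_format(mag_bound=3.2, wordlength=16)
    print(msb, lsb, quantize(math.pi, lsb) * 2.0 ** lsb)  # pi on 16 bits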

10:30 Coffee break
11:00 Mathlib: Do It Yourself (slides) (V. Innocente)

Scientists and engineers need to implement mathematical functions that best fit their needs, including tuning for precision, validity range, target architecture, software environment and language. We present our work on providing scientists with such a tool, integrated into the analysis ecosystem deployed in High Energy Physics, in particular as provided by CERN. Besides some details of the implementation, we will specifically address support for architecture heterogeneity, including full bitwise reproducibility of results.

11:30 Metalibm-Lutetia (Ch. Lauter)

A typical floating-point environment includes support for a small set of about 30 mathematical functions such as exponential, logarithm, trigonometric and hyperbolic functions. These functions are provided by mathematical software libraries (libm), typically in IEEE754 single, double and quad precision. This talk suggests replacing this libm paradigm by a more general approach: the on-demand generation of numerical function code, on arbitrary domains and with arbitrary accuracies. First, such code generation opens up the libm function space available to programmers. It may capture a much wider set of functions, and may even capture standard functions on non-standard domains and at non-standard accuracy/performance points. Second, writing libm code requires fine-tuned instruction selection and scheduling for performance, and sophisticated floating-point techniques for accuracy. Automating this task through code generation improves confidence in the code while enabling better design-space exploration, and therefore better time to market, even for the standard libm functions. This presentation discusses the new challenges of this paradigm shift, and presents the current state of the open-source function code generators available on http://www.metalibm.org/. The presentation further opens up a discussion on whether code generation for other numerical objects, such as linear time-invariant filters, is possible, and where the synergies between all these code generation projects lie.

12:00 Lunch
Program analysis
13:30 Enclosures of Roundoff Errors using Semidefinite Programming (V. Magron)

Roundoff errors cannot be avoided when implementing numerical programs with finite precision. The ability to reason about rounding is especially important if one wants to explore a range of potential representations, for instance in the world of FPGAs. This problem becomes challenging when the program does not employ solely linear operations, as non-linearities are inherent to many computational problems in real-world applications. We present two frameworks which can be combined to provide interval enclosures of absolute roundoff errors. The framework for upper bounds, related to the Real2Float software package, is based on optimization techniques employing semidefinite programming (SDP) and sparse sums-of-squares certificates, which can be formally checked inside the Coq theorem prover. The framework for lower bounds, related to the FPSDP software package, is based on a new hierarchy of convergent robust SDP approximations for certain classes of polynomial optimization problems. Each problem in this hierarchy can be solved exactly via SDP. Our tool covers a wide range of nonlinear programs, including polynomials and transcendental operations as well as conditional statements. We illustrate the efficiency and precision of this tool on non-trivial programs coming from biology, optimization and space control.
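
To make the quantity being bounded concrete, here is a brute-force illustration (ours, not the talk's method): sampling yields a valid lower bound on the worst-case absolute roundoff error of a tiny binary32 program, whereas the talk's SDP frameworks produce certified lower and upper bounds instead.

    from fractions import Fraction
    import random
    import struct

    def as_f32(x):
        """Round a double to binary32 and back."""
        return struct.unpack('f', struct.pack('f', x))[0]

    def prog_f32(x, y):
        """The program t = x*y; t + x, executed in binary32."""
        t = as_f32(x * y)
        return as_f32(t + x)

    def prog_exact(x, y):
        """The same program in exact rational arithmetic."""
        xq, yq = Fraction(x), Fraction(y)
        return xq * yq + xq

    random.seed(0)
    worst = Fraction(0)
    for _ in range(10000):
        x = as_f32(random.uniform(-1.0, 1.0))   # binary32 inputs
        y = as_f32(random.uniform(-1.0, 1.0))
        err = abs(Fraction(prog_f32(x, y)) - prog_exact(x, y))
        worst = max(worst, err)
    print("lower bound on the worst-case roundoff error:", float(worst))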

14:00 Daisy - a framework for sound accuracy analysis and optimization of numerical programs (slides) (A. Izycheva)

Floating-point or fixed-point computations are an integral part of many embedded and scientific computing applications, as are the roundoff errors they introduce. They expose an interesting tradeoff between efficiency and accuracy: the more precision we choose, the closer the results will be to the ideal real arithmetic, but the more costly the computation becomes. Unfortunately, the unintuitive and complex nature of finite-precision arithmetic makes manual optimization infeasible, so that automated tool support is indispensable. This talk presents an overview of Daisy, a framework for sound accuracy analysis and optimization of finite-precision programs. We will provide a high-level view of its main features: roundoff error analysis as well as rewriting and mixed-precision optimization.

14:30 Salsa: An Automatic Tool to Improve the Numerical Accuracy of Programs (slides) (N. Damouche)

Salsa is an automatic tool that improves the accuracy of the floating-point computations done in numerical codes. Based on static analysis methods by abstract interpretation, our tool takes as input an original program, applies to it a set of transformations and then generates an optimized program which is more accurate than the initial one. The original and the transformed programs are written in the same imperative language. This talk gives a concise description of former work on the techniques implemented in Salsa, extended with a presentation of the main software architecture, the inputs and outputs of the tool, as well as experimental results obtained by applying the tool to a set of sample programs coming from embedded systems and numerical analysis.
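
A hand-made example of the kind of rewriting such tools automate (this is not Salsa output): the two expressions below are mathematically equal, but the factored form avoids a cancellation.

    def naive(x, y):
        return x * x - y * y         # cancellation when x is close to y

    def rewritten(x, y):
        return (x - y) * (x + y)     # mathematically equal, more accurate here

    x, y = 1.0 + 2.0 ** -29, 1.0
    print(naive(x, y))               # 2^-28: the 2^-58 term was lost in x*x
    print(rewritten(x, y))           # 2^-28 + 2^-58: exact in this case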

15:00 Coffee break
Fixed-point implementation of floating-point functions
15:30 Computing floating-point functions with fixed-point operations (slides) (F. de Dinechin)

Elementary functions from the mathematical library input and output floating-point numbers. However, it is possible to implement them purely using integer/fixed-point arithmetic. This option was not attractive between 1985 and 2005, because mainstream processor hardware supported 64-bit floating-point but only 32-bit integers. This has changed in recent years, in particular with the generalization of native 64-bit integer support. The purpose of this talk is therefore to reevaluate the relevance of computing floating-point functions in fixed-point arithmetic. For this, several variants of the double-precision logarithm function are implemented and evaluated. Formulating the problem as a fixed-point one is easy after the range has been (classically) reduced. Then, 64-bit integers provide slightly more accuracy than the 53-bit mantissa, which helps speed up the evaluation. Finally, multi-word arithmetic, critical for accurate implementations, is much faster in fixed-point, and natively supported by recent compilers. Thanks to all this, a purely integer implementation of the correctly rounded double-precision logarithm outperforms the previous state of the art, with the worst-case execution time reduced by a factor of 5. This work also introduces variants of the logarithm that input a floating-point number and output the result in fixed-point. These are shown to be both more accurate and more efficient than the traditional floating-point functions for some applications.
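
The flavour of such fixed-point kernels can be modelled with Python integers standing in for int64 (the Q-format, coefficients and names below are ours, not the talk's): once the argument is range-reduced, a polynomial is evaluated entirely in integer arithmetic.

    import math

    F = 60                               # Q-format: value = mantissa / 2^F

    def to_fx(x):                        # real -> fixed-point mantissa
        return round(x * (1 << F))

    def fx_mul(a, b):                    # fixed-point multiply (truncating);
        return (a * b) >> F              # int64 hardware: widening mul + shift

    # degree-4 Taylor coefficients of log1p(u) near 0, quantized to the format
    coeffs = [to_fx(c) for c in (0.0, 1.0, -0.5, 1.0 / 3.0, -0.25)]

    def log1p_fx(u_fx):                  # Horner scheme, pure integer ops
        acc = coeffs[-1]
        for c in reversed(coeffs[:-1]):
            acc = fx_mul(acc, u_fx) + c
        return acc

    u = 0.01
    print(log1p_fx(to_fx(u)) / 2.0 ** F, math.log1p(u))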

16:00 Generating hardware Block Floating-Point implementations with metalibm-lugdunum (slides) (N. Brunie)

Generating an RTL implementation of a floating-point function often boils down to implementing fixed-point kernels. Metalibm-Lugdunum has been extended with RTL generation and validation capabilities. Support for fixed-point data types has also been added. After reviewing the modifications and improvements made to the Metalibm core and code generation pipeline, we will exemplify these features by prototyping a mixed-precision FMA, an exact-accumulation FMA and a floating-point division. Our goal is to demonstrate how the mix of input language extensions, legalizing passes and RTL validation features can be used to quickly prototype floating-point functions based on fixed-point implementations.

16:30 Coffee break
17:00 FloatApprox: a toolbox for creating faithfully rounded floating-point function approximators in FPGAs (slides) (D. Thomas)

This talk presents a method for automatically creating high performance pipelined floating-point function approximations for FPGAs. Both input and output are floating-point, but internally the function approximator uses fixed-point polynomial segments, guaranteeing a faithfully rounded output. A robust and automated non-uniform segmentation scheme is used to segment any twice-differentiable input function and produce platform-independent VHDL. The approach works on arbitrary functions, including those with no closed-form solution, and is device independent.
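
For intuition, a naive segmentation loop is easy to write down; the sketch below (ours: least-squares fits, plain bisection, sampled error estimates) ignores everything that makes FloatApprox robust, such as certified error bounds and hardware-friendly segment addressing.

    import numpy as np

    def segment(f, a, b, degree=2, tol=1e-6, samples=64):
        """Bisect [a,b] until a degree-`degree` fit meets tol on each piece."""
        xs = np.linspace(a, b, samples)
        coeffs = np.polyfit(xs, f(xs), degree)          # least-squares fit
        err = np.max(np.abs(np.polyval(coeffs, xs) - f(xs)))
        if err <= tol:
            return [(a, b, coeffs)]                     # one segment suffices
        mid = 0.5 * (a + b)                             # split where curvature
        return (segment(f, a, mid, degree, tol, samples) +
                segment(f, mid, b, degree, tol, samples))  # demands it

    segs = segment(np.exp, 0.0, 4.0)
    print(len(segs), "segments; widths from",
          min(b - a for a, b, _ in segs), "to", max(b - a for a, b, _ in segs))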

17:30 Towards IIR filters computing just right (slides) (F. de Dinechin)

Linear Time Invariant (LTI) filters are often specified and simulated using high-precision software, before being implemented in low-precision fixed-point hardware. A problem is that the hardware does not behave exactly like the simulation, due to quantization and rounding issues. This talk advocates the construction of LTI architectures that behave as if the computation were performed with infinite accuracy, then rounded only once to the low-precision output format. From this minimalist specification, it is possible to deduce the optimal values of many architectural parameters, including all the internal data formats. This requires a detailed error analysis that captures not only the rounding errors but also their infinite accumulation in recursive filters. This error analysis then guides the design of hardware satisfying the accuracy specification at minimal hardware cost. We detail this generic methodology for the case of low-precision LTI filters in Direct Form I implemented in FPGA logic. The approach is demonstrated by a fully automated and open-source architecture generator tool, and validated on a range of Infinite Impulse Response filters.

19:00 Apéro + dinner
Tuesday 13/03
Time Topic
8:30 Welcome + coffee
Approximation
9:00 Polynomial approximation with real, fixed-point, or floating-point coefficients. A small review. (S. Chevillard)

Polynomial approximation, or more generally approximation in a finite-dimensional vector space of functions, plays an important role in, e.g., the design of code for numerically evaluating mathematical functions, or the design of finite impulse response filters. In this talk, I will present the Remez algorithm for finding the best approximation to a function, with respect to the sup norm, by a polynomial of a given degree, and I will describe its extensions to other spaces, like the Parks-McClellan algorithm for trigonometric polynomials. In practice, specific constraints appear, like fixing the value of some coefficients, or constraining the number of bits on which some coefficients should be stored. We will see how these constraints may be tackled, for instance with the fpminimax algorithm or with linear programming techniques. The talk will be illustrated with practical examples studied with the software tool Sollya.
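
For reference, a bare-bones Remez exchange fits in a few lines of numpy. This is our didactic simplification: it assumes a well-behaved error curve and skips the many numerical safeguards that tools like Sollya implement.

    import numpy as np

    def remez(f, n, a, b, iters=20):
        """Best degree-n polynomial approximation of f on [a,b] (sup norm)."""
        k = np.arange(n + 2)
        x = (a + b) / 2 - (b - a) / 2 * np.cos(np.pi * k / (n + 1))
        grid = np.linspace(a, b, 200 * (n + 2))
        for _ in range(iters):
            # solve p(x_i) + (-1)^i E = f(x_i) for the coefficients and E
            A = np.hstack([np.vander(x, n + 1, increasing=True),
                           ((-1.0) ** k)[:, None]])
            *c, E = np.linalg.solve(A, f(x))
            err = f(grid) - np.polyval(c[::-1], grid)
            # exchange: keep one extremal point per run of constant error sign
            runs = np.split(np.arange(grid.size),
                            np.flatnonzero(np.diff(np.sign(err))) + 1)
            ext = [int(r[np.argmax(np.abs(err[r]))]) for r in runs]
            if len(ext) < n + 2:
                break                    # degenerate reference: stop the demo
            x = grid[ext[:n + 2]]
        return np.array(c), abs(E)

    c, E = remez(np.exp, 5, 0.0, 1.0)
    print("leveled error ~", E)          # about 1e-6 for this instance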

9:30 Black-box approximation of bivariate functions (slides) (D. Thomas)

There is often a need to evaluate bivariate functions in FPGAs, using as little area as possible while maintaining full throughput. Custom solutions are available for certain functions such as atan2(x,y), but for an arbitrary f(x,y) there is no general solution for automatically creating a hardware operator. This talk outlines a black-box method for bivariate function approximation, which is intended to work with bivariate input ranges of up to 24 bits, and to provide faithfully rounded results at up to 24 bits of accuracy. In order to minimise RAM and DSP usage, this requires high-order polynomial patches, as well as non-linear range reduction which can adapt to local curvature. Initial results show the method is quite general, and can handle functions including atan2, the incomplete gamma function, Bessel functions, and inverses of those functions with a practical amount of resources (i.e. it doesn’t use up the whole FPGA).

10:00 Robust rational approximation with barycentric representations (slides) (S. Filip)

Rational approximation is historically a core topic in approximation theory, with applications in fields such as computer arithmetic, signal processing and model order reduction. In this talk I will discuss recent work with my collaborators on designing robust algorithms for computing best rational approximations to continuous functions over an interval. A core aspect of this work is to consider rational functions (of type (n,n)) in a so-called barycentric representation. We indicate how these representations can be chosen in an adaptive, problem-dependent way that greatly reduces the underlying numerical precision needed to obtain accurate results. Comparisons with more common representations involving ratios of polynomials in the monomial or Chebyshev bases reinforce this claim.
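
Evaluating a rational function in barycentric form is itself a one-liner; in the sketch below the nodes and the Berrut-style weights are placeholders (the talk is precisely about choosing support points and weights robustly).

    import numpy as np

    def bary_eval(x, nodes, values, weights):
        """r(x) = (sum_i w_i f_i/(x-x_i)) / (sum_i w_i/(x-x_i))."""
        diff = x - nodes
        hit = np.flatnonzero(diff == 0.0)   # x falls on a support point
        if hit.size:
            return values[hit[0]]
        q = weights / diff
        return np.dot(q, values) / np.sum(q)

    nodes = np.linspace(-1.0, 1.0, 7)
    values = np.exp(nodes)
    weights = (-1.0) ** np.arange(7)        # placeholder (Berrut) weights
    print(bary_eval(0.3, nodes, values, weights), np.exp(0.3))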

10:30 Coffee break
Digital filters
11:00 Abstracting Vector Architectures in Library Generators: Case Study of Convolution Filters (A. Stojanov)

We present FGen, a program generator for high-performance convolution operations (finite impulse response filters). The generator uses an internal mathematical DSL to enable structural optimization at a high level of abstraction. We use FGen as a testbed to demonstrate how to provide modular and extensible support for modern SIMD vector architectures in a DSL-based generator. Specifically, we show how to combine staging and generic programming with type classes to abstract over both the data type (real or complex) and the target architecture (e.g., SSE or AVX) when mapping DSL expressions to C code with explicit vector intrinsics. Benchmarks show that the generated code is highly competitive with commercial libraries.
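
The operation FGen generates code for is the plain FIR convolution below, written naively in Python as a specification; the generator's contribution is mapping it onto SIMD intrinsics.

    import numpy as np

    def fir(h, x):
        """Naive FIR filter: y[n] = sum_k h[k] * x[n-k] ('valid' part only)."""
        taps = len(h)
        return np.array([sum(h[k] * x[i - k] for k in range(taps))
                         for i in range(taps - 1, len(x))])

    h = np.array([0.25, 0.5, 0.25])         # a simple 3-tap low-pass
    x = np.random.default_rng(0).standard_normal(16)
    print(np.allclose(fir(h, x), np.convolve(x, h, mode='valid')))  # True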

11:30 When computer arithmetic and signal processing meet (slides) (Th. Hilaire)

In this presentation, we will exhibit several reliable tools used to transform linear signal processing (or control) algorithms into finite-precision algorithms:
- reliable determination of the range of the variables (and hence their most significant bit);
- reliable determination of the transfer function error (the error between the exact and the implemented filter);
- reliable computation of the Worst-Case Peak Gain, used as a basic brick for the other computations.
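
The Worst-Case Peak Gain of a filter with impulse response h is sum_k |h[k]|: the largest output magnitude reachable with inputs bounded by 1. The sketch below merely truncates that sum (our simplification); the point of a reliable computation, as in the talk, is to bound the truncation and rounding errors rigorously.

    def wcpg(b, a, terms=10000):
        """Truncated worst-case peak gain of the filter a(z) y = b(z) u."""
        y, total = [], 0.0
        for n in range(terms):               # impulse response by recursion
            acc = b[n] if n < len(b) else 0.0
            for k in range(1, len(a)):
                if n - k >= 0:
                    acc -= a[k] * y[n - k]
            y.append(acc)
            total += abs(acc)
        return total

    # y[n] = 0.9*y[n-1] + u[n]: WCPG is 1/(1-0.9) = 10
    print(wcpg(b=[1.0], a=[1.0, -0.9]))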

12:00 Lunch
Elementary function case studies
13:30 Exact Lookup Tables for the Evaluation of Trigonometric and Hyperbolic Functions (G. Revy)

Elementary mathematical functions are pervasively used in many applications such as electronic calculators, computer simulations, or critical embedded systems. Their evaluation is always an approximation, which usually makes use of mathematical properties, precomputed tabulated values, and polynomial approximations. Each step generally combines approximation error and evaluation error in finite-precision arithmetic. Tabulated values, when they are used, generally embed a rounding error inherent to the transcendence of elementary functions. In this talk, we propose a general method to use error-free tabulated values, which pays off when two or more terms have to be tabulated in each table row. For the trigonometric and hyperbolic functions, we show that Pythagorean triples can lead to such tables with little time and memory usage. When targeting correct rounding in double precision for the same functions, we also show that this method saves memory and floating-point operations by up to 29% and 42%, respectively.
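
The idea can be played with directly: a Pythagorean triple (p, q, r) gives a point (q/r, p/r) exactly on the unit circle, hence an angle t = atan2(p, q) whose sine and cosine are exact rationals; tabulating the triple closest to each breakpoint moves the rounding error into the angle, where it can be compensated. The brute-force search below is our toy, not the paper's construction.

    import math

    def closest_triple(theta, m_max=60):
        """Triple whose angle atan2(p, q) is closest to theta."""
        best = None
        for m in range(2, m_max):
            for n in range(1, m):            # Euclid's formula
                p, q, r = m * m - n * n, 2 * m * n, m * m + n * n
                for s, c in ((p, q), (q, p)):    # both orientations
                    d = abs(math.atan2(s, c) - theta)
                    if best is None or d < best[0]:
                        best = (d, s, c, r)
        _, s, c, r = best
        return s, c, r

    s, c, r = closest_triple(math.pi / 8)
    print(f"sin(t) = {s}/{r}, cos(t) = {c}/{r}, t = {math.atan2(s, c)}")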

14:00 Contribution 101: The GNU C Library and its libm (A. Shankar)

The GNU C Library (glibc) implements the core runtime of GNU/Linux systems, providing, among other libraries, a libm. In this talk, I will speak about glibc and its libm implementation, share ideas for how it can be improved, and walk through the process of contributing to glibc and its libm. The project needs contributions from experts, like you, interested in improving the quality of the numerical methods used by libm. We can offer only the satisfaction of seeing one's ideas and research used in one of the most widely deployed math libraries in the world.

14:30 A Sagenstein experience: implementing argerfi (Ch. Lauter)

When some special function is not provided by the classical mathematical system libraries, such as libm, researchers who need it, for example to run a physical experiment and handle the data it produces, may be in trouble. The metalibm-lutetia tool is intended to be a code generation tool for open-ended mathematical functions. It might hence be used to respond in such a case, when a user requires the implementation of some not-yet-implemented function. On its own, however, the metalibm-lutetia tool can only cover compositions of base functions that are already known to the tool. As a matter of fact, code generation for some mathematical function requires code to evaluate that function: we have a classical chicken-and-egg problem. In this talk, we present a recent experience that shows how this problem can be overcome. The ore_algebra package, written for the Sage computer algebra system, allows functions defined by a certain family of differential equations to be evaluated. By binding metalibm-lutetia to Sage, we succeeded in implementing, within a very short timeframe, the argerfi special function that a colleague in Physics had asked us to implement.

15:00 Coffee break
Open issues, collaborations, and works in progress