IEEE 754 compliant sqrt() implementation for the double type

Aug 17 2020

I am trying to implement the function double __ieee754_sqrt(double x), using a hardware instruction to obtain the first approximation:

double __ieee754_sqrt(double x) {
    double z;
    /* get reciprocal of the square root (6.75 bits accuracy) */
    __asm(" QSEED.DF %0,%1 \n": "=e" (z):"e" (x):);
    z = 1 / z;
    z = ( z + x / z) / 2; /* 1st Newton-Raphson iteration */
    z = ( z + x / z) / 2; /* 2nd Newton-Raphson iteration */
    z = ( z + x / z) / 2; /* 3rd Newton-Raphson iteration */
    z = ( z + x / z) / 2; /* 4th Newton-Raphson iteration */
    return z;
}

However, the paranoia.c ( link , link ) test complains:

Square root is neither chopped nor correctly rounded.
Observed errors run from -6.0493828e-01 to 5.0000000e-01 ulps. 

Question: how do I implement the additional logic for chopping and correct rounding?

UPD. The hardware does not support sqrt() natively. It only supports obtaining the reciprocal of the square root (with 6.75 bits of accuracy).

UPD2.

  1. I used njuffa's solution with a minor change (many thanks!): qseeddf() instead of qseedf() => use fma() instead of fmaf(). Why? It is faster because the double <=> float conversions are omitted.
  2. 예, FMA (Fused Multiply-Add Instruction)는 하드웨어에서 지원됩니다.
  3. Thanks to all for participating in the discussion and for the detailed answers!
  4. To all interested in the topic, here is the list of sqrt() implementations:
    1. From the Cygwin math library (libm): cygwin-snapshot-20200710-1/newlib/libm/math/e_sqrt.c: Copyright (C) 1993 by Sun Microsystems.
    2. From GNU C library (glibc):
      1. glibc-2.31/sysdeps/ieee754/dbl-64/e_sqrt.c: from the IBM Accurate Mathematical Library.
      2. glibc-2.31/sysdeps/powerpc/fpu/e_sqrt.c: using __builtin_fma() functions.

Answers

2 njuffa Aug 18 2020 at 06:38

Before embarking on the construction of one's own implementation, it is advisable to search the internet to check if suitable and well-tested open-source code is available.

Common iterative algorithms use division-free iterations for the reciprocal square root to the desired accuracy, back-multiply with the argument to compute the square root, and finally round using the desired rounding mode. Iterations for the reciprocal square root can use either Newton-Raphson iterations with quadratic convergence (roughly doubling the number of correct bits) or Halley iterations with cubic convergence (roughly tripling the number of correct bits). While higher-order iterations exist, they are typically not used.

To keep the code simple, it is advisable to reduce the argument to a single narrow interval comprising two consecutive binades in the case of binary floating-point arithmetic. Note that this generally does not result in the highest performance implementation due to the need for exponent manipulation. For performance reasons, the initial iteration(s) for a double-precision implementation are often performed in single precision.

In the exemplary ISO-C99 implementation below I am showing how a correctly rounded double-precision square root can be implemented along those lines. I am assuming that double maps to IEEE-754 binary64 and that float maps to IEEE-754 binary32, and I am restricting myself to a sqrt that operates in IEEE-754 round-to-nearest-or-even mode.

Very importantly, I am assuming that the processor hardware provides fused multiply-add instructions and that these are correctly exposed via the standard math library functions fmaf and fma. In comments I had asked for clarification from OP as to the availability of FMA, but decided to get started on the code before feedback was available. Implementations without FMA are possible but much more challenging, and a sufficiently complete treatment would likely exceed the space of a Stackoverflow answer.

Since OP did not specify the target architecture or provide details of the starting approximation, I am using my own starting approximation below, based on a polynomial minimax approximation on the interval [0.25, 1] to which all non-exceptional arguments are reduced. qseedf() results are accurate to about 7 bits, so slightly better than OP's built-in hardware instruction. Whether this difference is significant, I cannot assess.

The algorithm, in particular the rounding logic, relies on the ideas of Peter Markstein, therefore I am reasonably confident that the algorithm is correct by construction. I have implemented only very rudimentary testing here. Best industry practice is to mathematically prove the correctness of such algorithms, see publications by David Russinoff and John Harrison for example. In a pinch, one might be able to get away with an exhaustive test across two consecutive binades (feasible these days with a small cluster running for a few days), coupled with random and pattern-based tests exercising all binades.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include <math.h>

/* Approximate 1/sqrt(a) on [0.25, 1] with an accuracy of about 7 bits */
float qseedf (float a)
{
    float r;

    r =             -2.43845296f;
    r = fmaf (r, a,  6.22994471f);
    r = fmaf (r, a, -5.91090727f);
    r = fmaf (r, a,  3.11237526f);
    return r;
}

double my_sqrt (double a)
{    
    const double QNAN_INDEFINITE = 0.0 / 0.0;
    const double half = 0.5;
    const double three_eighth = 0.375;
    double refined_rsqrt_approx, sqrt_approx, sqrt_residual, result, b;
    double rsqrt_approx, rsqrt_approx_err, rsqrt_approx_squared, reduced_arg;
    float argf, approxf, approxf_err;
    int e, t, f;

    /* handle normal cases */
    if ((a >= 0) && (a < INFINITY)) {
        /* compute exponent adjustments */
        b = frexp (a, &e);
        t = e - 2*512;
        f = t / 2;
        t = t - 2 * f;
        f = f + 512;

        /* map argument into the primary approximation interval [0.25,1) */
        reduced_arg = ldexp (b, t);
        
        /* Compute initial low-precision approximation */
        argf = (float)reduced_arg;
        approxf = qseedf (argf);
        
        /* Apply two Newton-Raphson iterations with quadratic convergence */
        approxf_err = fmaf (-argf, approxf * approxf, 1.0f);
        approxf = fmaf (0.5f * approxf, approxf_err, approxf);
        approxf_err = fmaf (-argf, approxf * approxf, 1.0f);
        approxf = fmaf (0.5f * approxf, approxf_err, approxf);
        
        /* rsqrt approximation is now accurate to 1 single-precision ulp */
        rsqrt_approx = (double)approxf;

        /* Perform a Halley iteration with cubic convergence. Based on the work
           of Peter Markstein. See: Peter Markstein, "IA-64 and Elementary 
           Functions", Prentice Hall 2000
        */
        rsqrt_approx_squared = rsqrt_approx * rsqrt_approx;
        rsqrt_approx_err = fma (-reduced_arg, rsqrt_approx_squared, 1.0);
        refined_rsqrt_approx = fma (fma (rsqrt_approx_err, three_eighth, half), 
                                rsqrt_approx * rsqrt_approx_err, rsqrt_approx);
        sqrt_approx = reduced_arg * refined_rsqrt_approx;
        sqrt_residual = fma (-sqrt_approx, sqrt_approx, reduced_arg);
        result = fma (sqrt_residual, half * refined_rsqrt_approx, sqrt_approx);

        /* map back from primary approximation interval by jamming exponent */
        result = ldexp (result, f);
    } else {
        /* handle special cases */
        result = (a < 0) ? QNAN_INDEFINITE : (a + a);
    }
    return result;
}

/*
  https://groups.google.com/forum/#!original/comp.lang.c/qFv18ql_WlU/IK8KGZZFJx4J
  From: geo <[email protected]>
  Newsgroups: sci.math,comp.lang.c,comp.lang.fortran
  Subject: 64-bit KISS RNGs
  Date: Sat, 28 Feb 2009 04:30:48 -0800 (PST)

  This 64-bit KISS RNG has three components, each nearly
  good enough to serve alone.    The components are:
  Multiply-With-Carry (MWC), period (2^121+2^63-1)
  Xorshift (XSH), period 2^64-1
  Congruential (CNG), period 2^64
*/
static uint64_t kiss64_x = 1234567890987654321ULL;
static uint64_t kiss64_c = 123456123456123456ULL;
static uint64_t kiss64_y = 362436362436362436ULL;
static uint64_t kiss64_z = 1066149217761810ULL;
static uint64_t kiss64_t;
#define MWC64  (kiss64_t = (kiss64_x << 58) + kiss64_c, \
                kiss64_c = (kiss64_x >> 6), kiss64_x += kiss64_t, \
                kiss64_c += (kiss64_x < kiss64_t), kiss64_x)
#define XSH64  (kiss64_y ^= (kiss64_y << 13), kiss64_y ^= (kiss64_y >> 17), \
                kiss64_y ^= (kiss64_y << 43))
#define CNG64  (kiss64_z = 6906969069ULL * kiss64_z + 1234567ULL)
#define KISS64 (MWC64 + XSH64 + CNG64)

int main (void)
{
    const uint64_t N = 10000000000ULL; /* desired number of test cases */
    double arg, ref, res;
    uint64_t argi, refi, resi, count = 0;
    double spec[] = {0, 1, INFINITY, NAN};

    printf ("test a few special cases:\n");
    for (int i = 0; i < sizeof (spec)/sizeof(spec[0]); i++) {
        printf ("my_sqrt(%22.13a) = %22.13a\n", spec[i], my_sqrt(spec[i]));
        printf ("my_sqrt(%22.13a) = %22.13a\n", -spec[i], my_sqrt(-spec[i]));
    }
    
    printf ("test %llu random cases:\n", N);
    do {
        count++;
        argi = KISS64;
        memcpy (&arg, &argi, sizeof arg);
        res = my_sqrt (arg);
        ref = sqrt (arg);
        memcpy (&resi, &res, sizeof resi);
        memcpy (&refi, &ref, sizeof refi);
        if (resi != refi) {
            printf ("\rerror @ arg=%22.13a  res=%22.13a  ref=%22.13a\n",
                    arg, res, ref);
            return EXIT_FAILURE;
        }
        if ((count & 0xfffff) == 0) printf ("\r[%llu]", count);
    } while (count < N);
    printf ("\r[%llu]", count);
    printf ("\ntests PASSED\n");
    return EXIT_SUCCESS;
}

The output of the above program should look similar to this:

test a few special cases:
my_sqrt(  0x0.0000000000000p+0) =   0x0.0000000000000p+0
my_sqrt( -0x0.0000000000000p+0) =  -0x0.0000000000000p+0
my_sqrt(  0x1.0000000000000p+0) =   0x1.0000000000000p+0
my_sqrt( -0x1.0000000000000p+0) =  -0x1.#IND000000000p+0
my_sqrt(  0x1.#INF000000000p+0) =   0x1.#INF000000000p+0
my_sqrt( -0x1.#INF000000000p+0) =  -0x1.#IND000000000p+0
my_sqrt(  0x1.#QNAN00000000p+0) =   0x1.#QNAN00000000p+0
my_sqrt( -0x1.#QNAN00000000p+0) =  -0x1.#QNAN00000000p+0
test 10000000000 random cases:
[10000000000]
tests PASSED
1 RickJames Aug 18 2020 at 01:08
z = 1 / z;
z = ( z + x / z) / 2; /* 1st Newton-Raphson iteration */
...

-->

z = 1 / z;
z += ( x / z - z) * 0.5; /* 1st Newton-Raphson iteration */
...

This may be faster.

And stop one iteration sooner (I think.)

When you stop, compare z*z and x. z*z will (I think) be no smaller than x. Subtract 1 ulp from z and check z*z vs x again. It is not a perfect check of "correct rounding", but it may be "good enough" to decide between z and z - 1ulp.

Since you got such a large range of errors, I worry that the rest of the floating point 'hardware' is sloppy when it comes to rounding, or even precision.

Oops, I forgot. There was a reason for giving you an approximation to the reciprocal square root -- keep iterating on the reciprocal square root itself; you can do it with multiplies instead of divides, thereby being (in most hardware) significantly faster and possibly with less roundoff.

z = z * (1.5 - 0.5 * x * z * z); /* 1st Newton-Raphson iteration on 1/sqrt(x) */
...
z = 1 / z;

Also, see if there is a way to decrement the exponent instead of doing a multiply for / 2.
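One way to sketch that idea in portable C is to decrement the biased exponent field directly (half_by_exponent is a hypothetical helper; this is only valid for positive normal values, and on most modern FPUs a multiply by 0.5 is already cheap, so it pays off only where multiplication is slow):

```c
#include <stdint.h>
#include <string.h>

/* Halve a positive, normal double by decrementing its biased exponent
   field, instead of multiplying by 0.5. Breaks for zeros, subnormals,
   infinities and NaNs, which a real implementation must exclude first. */
static double half_by_exponent (double z)
{
    uint64_t bits;
    memcpy (&bits, &z, sizeof bits);   /* type-pun without UB */
    bits -= (uint64_t)1 << 52;         /* exponent field starts at bit 52 */
    memcpy (&z, &bits, sizeof z);
    return z;
}
```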