The AltiVec Difference
Scaling and Translation
Mathematically, this operation is represented by:
y = ax + b
where x and y are vectors and a and b are constants. In this operation, x is scaled by a factor of a and then shifted by b. With scalar computations, this can be implemented as:
for (i=1; i<=n; i++) {
    y[i]=alpha*x[i]+beta;
}
where x, y, alpha and beta are defined as floats. Using AltiVec’s vector multiply-add instruction, the same operation can be written as:
for (i=1; i<=n/4; i++) {
    y[i]=vec_madd(alphaV,x[i],betaV);
}
Here, x and y are vector floats, as are alphaV and betaV:
alphaV = (alpha, alpha, alpha, alpha)
betaV = (beta, beta, beta, beta)
These four-element vectors were created from alpha and beta for use in the vec_madd function. On the PowerBook G4, again with n=1000, the results are 198 MFLOPS for the scalar computation and 386 MFLOPS for the vector computation. Here, the performance gain from AltiVec is a factor of 1.9.
2D Rotation
In this example, data points in a 2D xy plane are mapped into a new uv coordinate system through a rotation of axes by an angle θ. The equations for this transformation are:
u = x cos θ + y sin θ
v = -x sin θ + y cos θ
To carry out this mapping transformation for a collection of “n” (x,y) points, the following scalar computation can be used:
for (i=1; i<=n; i++) {
    u[i]=x[i]*c+y[i]*s;
    v[i]=-x[i]*s+y[i]*c;
}
where s = sin θ and c = cos θ. Using AltiVec’s vector multiply-add function, the computation can be vectorized as follows:
for (i=1; i<=n/4; i++) {
    u[i]=vec_madd(x[i],cV,zeroV);
    u[i]=vec_madd(y[i],sV,u[i]);
    v[i]=vec_madd(x[i],msV,zeroV);
    v[i]=vec_madd(y[i],cV,v[i]);
}
As before, the following vector floats were created for this computation:
cV=(c, c, c, c)
sV=(s, s, s, s)
msV=(–s, –s, –s, –s)
zeroV=(0., 0., 0., 0.)
For n=1000, the two approaches give the following performance: 300 MFLOPS for the scalar computation, and 472 MFLOPS for the vector computation, a factor of 1.6 performance increase from AltiVec.
Matrix Multiplication
The last example involves the multiplication of n x n matrices A and B to form the n x n matrix C:

C = AB

In this multiplication, the ij element of matrix C is formed by the dot product of the ith row of A with the jth column of B:

c[i,j] = a[i,1]*b[1,j] + a[i,2]*b[2,j] + ... + a[i,n]*b[n,j]
Using a scalar computation, the matrix multiplication can be structured as follows:
for (i=1; i<=n; i++) {
    for (j=1; j<=n; j++) {
        for (k=1; k<=n; k++) {
            c[i,j]=c[i,j]+a[i,k]*b[k,j];
        }
    }
}
This algorithm involves 2n³ floating-point operations, carried out by a total of 6n³ instructions (3 loads, an add, a multiply, and a store are required each time the computation is evaluated). The easiest way to implement this matrix multiplication using AltiVec is to vectorize the inner “k” loop to use a vector multiply-add function. Conceptually, this would look like:
for (i=1; i<=n; i++) {
    for (j=1; j<=n; j++) {
        for (k=1; k<=n/4; k++) {
            c[j+(i-1)*n]=vec_madd(a[k+(i-1)*n],b[j+(k-1)*n],c[j+(i-1)*n]);
        }
    }
}
Here, the 2D array’s [i,j] index has been replaced by a single index [j+(i-1)n] that threads through the data in "vector" form. Using the multiply-add, the a[i,k]*b[k,j] multiplication is carried out and added to c[i,j], repeatedly, in vector form. AltiVec would march through this computation four elements at a time, until all of the a[i,k]*b[k,j] products have been carried out.
In reality, the algorithm is not so simple, and neither is the contraction of indices: a few additional lines of code are required to stuff the multidimensional arrays into AltiVec vector functions. But the end result is quite good. On the PowerBook G4 with n=200, the scalar computation runs at 84 MFLOPS, while the AltiVec version runs at 384 MFLOPS, a factor of 4.6 improvement. An even better matrix multiplication algorithm available from Apple boosts that performance to a whopping 681 MFLOPS, an improvement close to a factor of 8.
This is a good example of how AltiVec can really pay off for complex computations. For a more detailed look at the AltiVec matrix multiplication code, see Apple’s Developer Web site.
One final note: each of these examples implied that n was evenly divisible by four, but that is not a requirement. Arrays can be padded with dummy elements (rounding the array size up to the next largest multiple of 4), or a scalar "cleanup" loop can be used to pick up any leftovers. An example is shown below:
for (i=1; i<=n/4; i++) {
    z[i]=vec_add(x[i],y[i]);
}
m=mod(n,4)
for (i=n-m+1; i<=n; i++) {
    z[i]=x[i]+y[i];
}
Here, m=mod(n,4) is the remainder of n/4, either 0, 1, 2, or 3. In cases where n is evenly divisible by 4, m=0 and the scalar cleanup loop will be bypassed.
Final Thoughts
This article took a brief look at AltiVec and illustrated some of the performance gains available with this technology. For specific applications, you need to take a close look at the code and see where AltiVec can be used. In many cases, using AltiVec will involve very minor code changes, more of a code tune-up, while other cases will require in-depth code restructuring and rewriting to allow for vectorization. How well AltiVec works will depend on many factors, but if your code hinges on key computations that can be vectorized, AltiVec will surely make a difference.
Craig Hunter is an aerospace engineer at NASA Langley Research Center in Hampton, Virginia.
