# Vectorized Matrix Multiplication - 2022.1 English

## AI Engine Kernel Coding Best Practices Guide (UG1079)

- Document ID: UG1079
- Release Date: 2022-05-25
- Version: 2022.1 English
The following is an example of an `int8 x int8` matrix multiplication kernel for `(64 x 64) x (64 x 64)` matrices. The elementary matrix multiplication shape is `4x16x8` (`M x K x N`), and the input data is reshaped into this block layout before the multiplication.
```cpp
#include <adf.h>
#include "aie_api/aie.hpp"
#include "aie_api/aie_adf.hpp"

const int SHIFT = 10;

// Shape of one elementary mmul
const int M = 4;
const int K = 16;
const int N = 8;

// Total matrix sizes
const int rowA = 64;
const int colA = 64;
const int colB = 64;

// Number of elementary mmuls in each dimension
const int num_rowA = rowA / M;
const int num_colA = colA / K;
const int num_colB = colB / N;

void matrix_mul(input_window<int8>* __restrict matA,
                input_window<int8>* __restrict matB,
                output_window<int8>* __restrict matC) {
    using MMUL = aie::mmul<M, K, N, int8, int8>;
    const int8* __restrict pA = (int8*)matA->ptr;
    const int8* __restrict pB = (int8*)matB->ptr;
    int8* __restrict pC = (int8*)matC->ptr;

    // For profiling only
    unsigned cycle_num[2];
    aie::tile tile = aie::tile::current();
    cycle_num[0] = tile.cycles(); // cycle counter of the AI Engine tile

    int8* __restrict pC1 = pC;
    for (unsigned i = 0; i < num_rowA; i++) {     // output rows of element matrices
        for (unsigned j = 0; j < num_colB; j++) { // output columns of element matrices
            const int8* __restrict pA1 = pA + (i * num_colA + 0) * MMUL::size_A;
            const int8* __restrict pB1 = pB + (0 * num_colB + j) * MMUL::size_B;

            aie::vector<int8, MMUL::size_A> A0 = aie::load_v<MMUL::size_A>(pA1); pA1 += MMUL::size_A;
            aie::vector<int8, MMUL::size_B> B0 = aie::load_v<MMUL::size_B>(pB1); pB1 += MMUL::size_B * num_colB;

            MMUL C00;
            C00.mul(A0, B0);

            for (unsigned k = 0; k < num_colA - 1; k++) chess_prepare_for_pipelining {
                A0 = aie::load_v<MMUL::size_A>(pA1); pA1 += MMUL::size_A;
                B0 = aie::load_v<MMUL::size_B>(pB1); pB1 += MMUL::size_B * num_colB;
                C00.mac(A0, B0);
            }

            aie::store_v(pC1, C00.template to_vector<int8>(SHIFT)); pC1 += MMUL::size_C;
        }
    }

    // For profiling only
    cycle_num[1] = tile.cycles(); // cycle counter of the AI Engine tile
    printf("start=%u,end=%u,total=%u\n", cycle_num[0], cycle_num[1], cycle_num[1] - cycle_num[0]);
}
```
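The kernel indexes A as contiguous `4 x 16` tiles (tile `i*num_colA + k`) and B as contiguous `16 x 8` tiles (tile `k*num_colB + j`), so row-major inputs must be repacked into tile-row-major order on the host or in an upstream kernel. A minimal sketch of that repacking (the helper `tile_rowmajor` is illustrative, not part of the AIE API):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Repack a row-major R x C matrix into m x n tiles stored tile-row-major,
// each tile itself row-major -- the layout the kernel above reads.
std::vector<int8_t> tile_rowmajor(const std::vector<int8_t>& src,
                                  int R, int C, int m, int n) {
    std::vector<int8_t> dst(src.size());
    int tiles_per_row = C / n;
    for (int r = 0; r < R; ++r)
        for (int c = 0; c < C; ++c) {
            int tile = (r / m) * tiles_per_row + (c / n); // which m x n tile
            int off  = (r % m) * n + (c % n);             // position inside the tile
            dst[tile * m * n + off] = src[r * C + c];
        }
    return dst;
}
```

For A, call it with `m = 4, n = 16`; for B, with `m = 16, n = 8`. The tile index then matches the `(i * num_colA + k)` and `(k * num_colB + j)` pointer arithmetic in the kernel.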

The profiled result shows that the loop takes around 5500 cycles. In total there are 64*64*64=262144 multiplications on the `int8*int8` data type, which corresponds to 262144/5500 ~= 48 `int8*int8` MAC operations per cycle.
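The throughput figure follows from simple arithmetic (a sketch; the cycle count is the profiled value quoted above):

```cpp
// Every element of the 64 x 64 output accumulates 64 products, so the kernel
// performs 64 * 64 * 64 int8 x int8 MAC operations in total.
constexpr long total_macs  = 64L * 64 * 64;  // 262144
constexpr long loop_cycles = 5500;           // profiled cycle count from above
constexpr long macs_per_cycle = total_macs / loop_cycles; // 47, i.e. ~48 MACs/cycle
```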

Add `-v` to the `aiecompiler` command and examine the kernel compilation log in `Work/aie/<COL_ROW>/<COL_ROW>.log`:
```
HW do-loop #765 in ".../matrix_mul.cc", line 43: (loop #13) :
critical cycle of length 4 : b97 -> b99 -> b101 -> b102 -> b103 -> b104 -> b97
minimum length due to resources: 4
scheduling HW do-loop #765
(algo 1a) -> # cycles: 13
(modulo) -> # cycles: 4i 5i 6 ok (required budget ratio: 1)
(resume algo) -> after folding: 6 (folded over 2 iterations)
-> HW do-loop #765 in ".../Vitis/2022.1/aietools/include/aie_api/detail/aie1/mmul_8_8.hpp", line 278: (loop #13) : 6 cycles
```
The resource-limited minimum is 4 cycles, but each iteration of the innermost loop takes 6 cycles. In addition, the inner loop has a trip count of only `num_colA-1=3`. It is therefore worth checking whether flattening the inner loop, which lets the tool pipeline a larger number of instructions in the outer loop, helps. The adjusted loop is as follows:
```cpp
for (unsigned i = 0; i < num_rowA; i++) {
    for (unsigned j = 0; j < num_colB; j++) chess_prepare_for_pipelining {
        const int8* __restrict pA1 = pA + (i * num_colA + 0) * MMUL::size_A;
        const int8* __restrict pB1 = pB + (0 * num_colB + j) * MMUL::size_B;

        aie::vector<int8, MMUL::size_A> A0 = aie::load_v<MMUL::size_A>(pA1); pA1 += MMUL::size_A;
        aie::vector<int8, MMUL::size_B> B0 = aie::load_v<MMUL::size_B>(pB1); pB1 += MMUL::size_B * num_colB;

        MMUL C00;
        C00.mul(A0, B0);

        for (unsigned k = 0; k < num_colA - 1; k++) chess_flatten_loop {
            A0 = aie::load_v<MMUL::size_A>(pA1); pA1 += MMUL::size_A;
            B0 = aie::load_v<MMUL::size_B>(pB1); pB1 += MMUL::size_B * num_colB;
            C00.mac(A0, B0);
        }

        aie::store_v(pC1, C00.template to_vector<int8>(SHIFT)); pC1 += MMUL::size_C;
    }
}
```

With the above adjustment, the achieved latency of the loop is around 3472 cycles, which is roughly 262144/3472 ~= 75 `int8*int8` MACs per cycle.
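For functional verification in simulation, the kernel output can be compared against a plain scalar model. The sketch below (the helper name `matmul_ref` is illustrative) accumulates in 32 bits and applies a shift-then-saturate step analogous to `to_vector<int8>(SHIFT)`; the exact rounding behavior of the hardware shift depends on the configured rounding control, so this sketch uses simple truncation, and the kernel's tiled output must be un-tiled before comparing.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Scalar reference: C = (A * B) >> shift, saturated to int8.
// A is rows x inner, B is inner x cols, both row-major.
std::vector<int8_t> matmul_ref(const std::vector<int8_t>& A,
                               const std::vector<int8_t>& B,
                               int rows, int inner, int cols, int shift) {
    std::vector<int8_t> C(rows * cols);
    for (int i = 0; i < rows; ++i)
        for (int j = 0; j < cols; ++j) {
            int32_t acc = 0;
            for (int k = 0; k < inner; ++k)
                acc += int32_t(A[i * inner + k]) * B[k * cols + j];
            int32_t v = acc >> shift;                            // scale down
            C[i * cols + j] = int8_t(std::clamp(v, -128, 127));  // saturate to int8
        }
    return C;
}
```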