The Kahn process network (KPN) is a widely used distributed programming model for running tasks in parallel whenever possible. This white paper describes how the AI Engine uses the KPN model for graph programming. There are various models of computation, each suited to a target architecture such as a central processing unit (CPU), a graphics processing unit (GPU), an FPGA, or the AI Engine. The following figure classifies these models of computation as sequential, concurrent, and functional.
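The defining property of a KPN is that processes communicate only through FIFO channels with blocking reads, which makes the network's output deterministic regardless of how the processes are scheduled. The following is a minimal illustrative sketch of that idea using Python threads and queues; it is not the AI Engine programming API, and the process and channel names are invented for the example.

```python
import queue
import threading

# Illustrative Kahn process network: processes exchange tokens only
# through FIFO channels. Writes are non-blocking (channels are treated
# as unbounded); reads block until a token arrives.

def producer(out_ch, values):
    for v in values:
        out_ch.put(v)          # non-blocking write to the FIFO channel

def adder(in_a, in_b, out_ch, n):
    for _ in range(n):
        a = in_a.get()         # blocking read: waits for a token
        b = in_b.get()
        out_ch.put(a + b)

def run_network():
    ch_a, ch_b, ch_out = queue.Queue(), queue.Queue(), queue.Queue()
    procs = [
        threading.Thread(target=producer, args=(ch_a, [1, 2, 3])),
        threading.Thread(target=producer, args=(ch_b, [10, 20, 30])),
        threading.Thread(target=adder, args=(ch_a, ch_b, ch_out, 3)),
    ]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return [ch_out.get() for _ in range(3)]

print(run_network())  # deterministic result: [11, 22, 33]
```

Because each process reads its inputs with blocking semantics, the result is the same no matter which thread runs first; this determinism is what makes the KPN model attractive for mapping a dataflow graph onto parallel hardware.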
In sequential models, tasks are executed one after another. In concurrent models, tasks are executed in parallel whenever possible. In functional models, tasks are implementation dependent, for example targeting a specific architecture such as a GPU or the programmable logic in an FPGA. The focus of this white paper is the computation model of AI Engine programming. This model guides the programmer when writing programs that target the AI Engine architecture; the aim is to fully leverage the computing power of the AI Engine by understanding its programming model. As the complexity of computational tasks has grown, the standard processor has proven insufficient to perform these tasks efficiently. In response, various computational architectures have evolved to address this shortcoming, such as multi-core CPUs, GPUs, and application-specific processors.