TABLA is a framework that generates accelerators for a class of machine learning algorithms. It provides a comprehensive solution, from the programming language down to the circuit level, to counter the diminishing performance gains of general-purpose processors. The strength of TABLA is its level of abstraction: programmers need not understand the underlying hardware to reap the full benefits of hardware acceleration.
TABLA leverages the insight that many statistical machine learning algorithms can be expressed as stochastic optimization problems. The programmer's only job is therefore to provide a high-level implementation of the gradient of the objective function; the framework generates the accelerator from that specification.
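To see why the gradient is the only algorithm-specific piece, consider stochastic gradient descent: the update loop is identical across algorithms, and only the gradient function changes. A minimal Python sketch under that framing (the linear-regression gradient here is an illustrative choice, not TABLA's syntax):

```python
# Stochastic gradient descent: the user supplies only the gradient of the
# objective function; the update loop itself is generic across algorithms.

def gradient(w, x, y):
    # Gradient of squared error for a linear model: (w . x - y) * x
    pred = sum(wi * xi for wi, xi in zip(w, x))
    return [(pred - y) * xi for xi in x]

def sgd(data, dim, lr=0.1, epochs=50):
    w = [0.0] * dim
    for _ in range(epochs):
        for x, y in data:
            g = gradient(w, x, y)
            w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

# Learn y = 2*x0 + 1 (bias folded in as a constant feature x1 = 1.0)
data = [([x, 1.0], 2.0 * x + 1.0) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]
w = sgd(data, dim=2)
```

Swapping in a different gradient (logistic regression, SVM, and so on) changes nothing else in the loop, which is exactly the separation TABLA exploits.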
Our in-house programming language makes it significantly easier for programmers to specify the gradient of the objective function. It provides language constructs and keywords common to many machine learning algorithms, and it lets programmers declare the different types of data involved, such as model inputs and outputs.
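As a rough illustration of what such a declaration-plus-gradient specification could look like, here is a hypothetical, TABLA-flavored spec fragment; the field names and the gradient expression are illustrative assumptions, not TABLA's exact syntax:

```python
# Hypothetical specification of a linear-model gradient (illustrative only):
# data declarations name the inputs, expected outputs, and learned parameters,
# and the gradient is given as an expression over them.
spec = {
    "model_input":  {"x": 8},   # feature vector with 8 elements
    "model_output": {"y": 1},   # expected output (label)
    "model":        {"w": 8},   # parameters to be learned
    "gradient":     "g[i] = (sum(w[i] * x[i]) - y) * x[i]",
}
```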
TABLA consists of accelerator templates that can be dynamically composed into the accelerator architecture; the programmer-specified machine learning algorithm determines the configuration of the template. These templates, designed by expert hardware designers, are written in synthesizable Verilog.
The model compiler statically generates the execution schedule for the accelerator. Internally, it first builds the dataflow graph of the gradient function, an effective representation for analyzing data dependencies. It then generates a schedule using a Minimum-Latency Resource-Constrained Scheduling (ML-RCS) algorithm, which executes the computation with as much parallelism as the available resources allow.
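ML-RCS-style scheduling is commonly approximated with greedy list scheduling: each cycle, issue as many ready operations (those whose predecessors have finished) as there are functional units. A simplified Python sketch of that idea, with unit-latency operations and a naive priority order rather than TABLA's exact heuristic:

```python
# Resource-constrained list scheduling over a dataflow graph.
# deps maps each operation to the operations it depends on; `units` is the
# number of functional units available per cycle. All ops take one cycle.

def list_schedule(deps, units):
    remaining = set(deps)
    finished = set()
    schedule = []  # schedule[cycle] = ops issued that cycle
    while remaining:
        ready = [op for op in remaining
                 if all(d in finished for d in deps[op])]
        issue = sorted(ready)[:units]  # naive priority: alphabetical
        schedule.append(issue)
        finished.update(issue)
        remaining.difference_update(issue)
    return schedule

# Gradient-style DFG: two multiplies feed an add, which feeds a subtract.
dfg = {"m1": [], "m2": [], "add": ["m1", "m2"], "sub": ["add"]}
print(list_schedule(dfg, units=2))
# With 2 units: cycle 0 -> m1, m2; cycle 1 -> add; cycle 2 -> sub
```

With a single unit the same graph takes four cycles, which shows how the resource constraint, not just the dependencies, shapes the static schedule.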