TMVA SOFIE - Enhancing Keras Parser and JAX/FLAX Integration

Description

SOFIE (System for Optimized Fast Inference code Emit) is a Machine Learning Inference Engine within TMVA (Toolkit for Multivariate Data Analysis) in ROOT. SOFIE offers a parser capable of converting ML models trained in Keras, PyTorch, or ONNX format into its own Intermediate Representation (IR), from which it generates C++ functions that can be easily invoked for fast inference of trained neural networks. Using the IR, SOFIE produces C++ header files that can be seamlessly included and used in a "plug-and-go" style.

SOFIE currently supports a wide range of Machine Learning operators defined by the ONNX standard, as well as a Graph Neural Network (GNN) implementation. It supports the parsing and inference of Graph Neural Networks trained using DeepMind's Graph Nets library.

As SOFIE continues to evolve, this project aims to enhance its Keras parser to support the latest TensorFlow/Keras versions and model storage formats, and to explore integration with JAX/FLAX.

Task ideas

In this project, the contributor will gain experience with C++ and Python programming, with TensorFlow/Keras and its storage formats for trained machine learning models, and with JAX/FLAX for accelerated machine learning. They will begin by familiarizing themselves with SOFIE and its Keras parser. After researching the changes required to support the latest TensorFlow version, they will implement the functionality needed to generate inference code for models saved by recent Keras releases. In the next phase, they will explore the JAX/FLAX library and investigate its potential integration with SOFIE.
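To make the storage-format question concrete, the sketch below saves a tiny Keras model in the two on-disk formats the parser has to deal with: the legacy HDF5 format and the newer zip-based `.keras` format (the default in Keras 3). The model architecture and file names are illustrative only.

```python
import tensorflow as tf

# A tiny sequential model, just to illustrate the two on-disk formats.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])

model.save("model.h5")     # legacy HDF5 format (single file: topology + weights)
model.save("model.keras")  # newer .keras format, default since Keras 3

# Either file should reload to an equivalent model:
reloaded = tf.keras.models.load_model("model.keras")
assert len(reloaded.layers) == len(model.layers)
```

Supporting the latest TensorFlow therefore means the SOFIE Keras parser must be able to read the `.keras` container in addition to the HDF5 files it already handles.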

Expected results and milestones

Requirements

Mentors

Additional Information

Corresponding Project

Participating Organizations