LinkedIn announced the open-sourcing of GDMix, a framework designed for efficient training of AI personalization models. According to the company, it improves on its previous release in this space, Photon ML, by adding support for deep learning models.
GDMix’s breakdown approach speeds up training of fixed-effect and random-effect models
GDMix is used for training fixed-effect and random-effect models, which are common in search and recommender systems. It speeds up the process by dividing a large model into a fixed-effect component and random-effect components, then solving each individually. This breakdown approach enables faster training of models on commodity hardware, removing the need for specialized processors, memory, and networking equipment.
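To make the breakdown concrete, here is a minimal, hypothetical sketch of the idea in plain Python: a global logistic regression is trained first as the fixed effect, then a tiny per-user model is trained on each user's own data with the frozen global score supplied as an offset. The function names and training loop are illustrative assumptions, not GDMix's actual API.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_weights(examples, offset_fn, dim, lr=0.5, epochs=300):
    """Batch gradient descent for logistic regression whose scores sit
    on top of frozen per-example offsets (illustrative, not GDMix code)."""
    w = [0.0] * dim
    for _ in range(epochs):
        grad = [0.0] * dim
        for x, y, uid in examples:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + offset_fn(uid, x))
            for i, xi in enumerate(x):
                grad[i] += (p - y) * xi
        w = [wi - lr * gi / len(examples) for wi, gi in zip(w, grad)]
    return w

# Toy data: (features, label, user_id). The two users disagree on which
# feature predicts a positive label, so one global model cannot fit both.
data = [([1.0, 0.0], 1, "a"), ([0.0, 1.0], 0, "a"),
        ([1.0, 0.0], 0, "b"), ([0.0, 1.0], 1, "b")]

# Step 1: fixed effect -- one global model over all users, no offsets.
w_global = train_weights(data, lambda uid, x: 0.0, dim=2)

def global_score(x):
    return sum(wi * xi for wi, xi in zip(w_global, x))

# Step 2: random effects -- a small per-user model, trained only on that
# user's data, with the frozen global score as a fixed offset.
per_user = {}
for uid in {u for _, _, u in data}:
    subset = [ex for ex in data if ex[2] == uid]
    per_user[uid] = train_weights(subset, lambda u, x: global_score(x), dim=2)

def predict(x, uid):
    z = global_score(x) + sum(wi * xi for wi, xi in zip(per_user[uid], x))
    return sigmoid(z)
```

Because each per-user problem is small and independent, the random-effect step parallelizes trivially, which is what lets this style of training run on commodity hardware.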
According to LinkedIn, the use of TensorFlow for data reading and gradient computation yielded a 10% to 40% improvement in training speed across various datasets. Compared with Photon ML, GDMix trains and evaluates models more efficiently and automatically, and it can handle anywhere from hundreds to millions of models.
GDMix used along with DeText to train the global fixed effect model
DeText, a toolkit for ranking based on textual features, can also be used within GDMix to train the global fixed-effect model. It emphasizes semantic matching with deep neural networks to help understand member intent in recommender systems.
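Semantic matching of the kind DeText performs ultimately reduces to scoring how close a query and a document are in an embedding space. The sketch below is a hedged simplification: the hand-picked vectors stand in for embeddings that DeText's deep encoders would actually learn, and the cosine ranking is only the final scoring step of such a system.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hand-picked vectors standing in for learned text embeddings (assumption).
query = [0.9, 0.1, 0.3]
docs = {"doc_a": [0.8, 0.2, 0.3],   # semantically close to the query
        "doc_b": [0.1, 0.9, 0.0]}   # unrelated

# Rank documents by similarity to the query, highest first.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
```

In a real ranking stack the embeddings come from trained neural encoders, but the retrieval logic follows this shape.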
At present, GDMix supports logistic regression models along with the deep NLP models provided by DeText. Arbitrary models, for now, must be designed and trained by users outside of GDMix.
LinkedIn’s latest announcement follows the release of LiFT
LinkedIn’s announcement of the open-sourcing of GDMix comes after it recently released a toolkit for measuring AI model fairness, the LinkedIn Fairness Toolkit (LiFT). The toolkit can be used during training to measure bias in corpora.
It can also assess model fairness by detecting differences in performance across subgroups. LinkedIn stated that LiFT is applied internally to measure the fairness metrics of training datasets before models are trained.
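The article does not show LiFT's actual API, but one common dataset-level fairness measure of the kind it computes is the demographic parity gap: the spread in positive-label rates across subgroups. Here is a minimal sketch under that assumption; the function name and toy dataset are illustrative, not LiFT code.

```python
def demographic_parity_gap(records):
    """records: (group, label) pairs; returns the gap between the highest
    and lowest positive-label rates across groups (illustrative metric)."""
    by_group = {}
    for group, label in records:
        by_group.setdefault(group, []).append(label)
    rates = [sum(ys) / len(ys) for ys in by_group.values()]
    return max(rates) - min(rates)

# Toy labeled dataset: group "x" has a 2/3 positive rate, group "y" has 1/3,
# so the gap is 1/3 -- a signal worth investigating before training a model.
dataset = [("x", 1), ("x", 1), ("x", 0), ("y", 1), ("y", 0), ("y", 0)]
gap = demographic_parity_gap(dataset)
```

Running such a check on training data before any model is fit is exactly the pre-training use case the article attributes to LiFT.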