Publish Your Keras Models On Kaggle And Hugging Face


An update from Kaggle

Kaggle Models opened a year ago and is already home to ~4,000 pre-trained models from a wide range of organizations. And now, Kaggle Models is opening up for user contributions, with Keras model uploads. If you have fine-tuned a model that you want to share with the world, here is how it works:

import keras
import keras_nlp

# Load the model.
gemma = keras_nlp.models.CausalLM.from_preset("gemma_1.1_instruct_7b_en")

# Fine-tune the model here. Example: Gemma fine-tuned to
# speak like a pirate. See bit.ly/gemma-pirate-demo
# ...

# Save the fine-tuned model as a KerasNLP preset.
gemma.save_to_preset("./gemma-pirate-instruct-7b")

# Upload the preset as a new model variant on Kaggle.
kaggle_uri = "kaggle://my_kaggle_username/gemma-pirate/keras/gemma-pirate-instruct-7b"
keras_nlp.upload_preset(kaggle_uri, "./gemma-pirate-instruct-7b")
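Note the structure of the Kaggle URI: kaggle://&lt;username&gt;/&lt;model_name&gt;/&lt;framework&gt;/&lt;variant_name&gt;. The username, model, and variant names in the example above are placeholders for your own.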

People will be able to load your model using the exact same .from_preset() call as for the original Gemma, simply by pointing the URI at your custom fine-tuned version.

gemma = keras_nlp.models.CausalLM.from_preset("kaggle://my_kaggle_username/gemma-pirate/keras/gemma-pirate-instruct-7b")
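Once loaded, the model behaves like any other KerasNLP CausalLM; as a quick sanity check, you can generate text right away (the prompt below is a hypothetical example):

# The loaded model exposes the standard KerasNLP generation API.
print(gemma.generate("Where be the treasure map?", max_length=64))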

To make user-uploaded models easier to discover, Kaggle provides model pages for them, where you can add a description, details about the dataset used for fine-tuning, and so on. This page is also where you make your uploaded model public (the "Settings" tab in the screenshot below).

[Screenshot: Kaggle model page for an uploaded Keras model, showing the "Settings" tab]

A delightful touch from the Kaggle team is the usability rating, which shows you which details are still missing before your model can be discovered and appreciated by the community.


Why Keras?

You will have noticed that the model.save_to_preset() and keras_nlp.upload_preset() calls are built right into Keras to make uploading convenient. While it is possible to upload models in any format, provided you publish instructions for loading them, Kaggle has chosen Keras as the preferred model format because of the consistent user experience provided by Keras for pre-trained models:

  • all models load through the same .from_preset() API
  • models have a familiar Keras API once loaded (model.fit() for fine-tuning, etc.; see the sketch after this list)
  • model source code is always available in KerasCV or KerasNLP
  • clean and readable Keras implementations
  • and with Keras 3, all models work on JAX, TensorFlow and PyTorch
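To make the fine-tuning and multi-backend points concrete, here is a minimal sketch, assuming a hypothetical dataset of raw training strings: with Keras 3 you pick a backend before importing Keras, and a loaded model supports the familiar fit() workflow.

import os

# With Keras 3, choose a backend before importing Keras: "jax", "tensorflow", or "torch".
os.environ["KERAS_BACKEND"] = "jax"

import keras_nlp

# Every model loads through the same .from_preset() API...
gemma = keras_nlp.models.CausalLM.from_preset("gemma_1.1_instruct_7b_en")

# ...and behaves like a regular Keras model once loaded.
# my_finetuning_data is a hypothetical tf.data.Dataset of raw training strings.
gemma.fit(my_finetuning_data, epochs=1)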

But it is always important to check our assumptions. The launch of the Gemma models, Google's latest open Large Language Models (LLMs), was a great test. Gemma was released in no fewer than 9 formats (!), and the Kaggle download numbers are in: there are more than twice as many downloads of the Keras version as of all other formats combined.

[Chart: Gemma downloads on Kaggle since launch]

Gemma downloads on Kaggle, by framework. Note 1: the "keras" version runs on JAX, PyTorch, and TensorFlow; the "pytorch" count is for a pure PyTorch version. Note 2: the "transformers" count represents downloads of the HF Transformers version of Gemma from Kaggle, not from Hugging Face.

Kaggle has run two Gemma competitions, and their KerasNLP starter notebooks have been copied more than 950 times between them (our internal benchmark: >500 copies = "wow, Kagglers found it useful", >1,000 copies = "amazing"):

  • Kaggle QA with Gemma - KerasNLP Starter
  • Prompt Recovery with Gemma - KerasNLP Starter

And Kagglers are doing some pretty amazing things with these models. Here is a selection:

  • Google AI Assistant for Data Science teaching
  • Advanced RAG with Gemma, Weaviate, and LlamaIndex
  • [Gemma] I am replacing myself with an LLM | Kaggle
  • Cheating Data Science Questions with Wikipedia RAG | Kaggle
  • Exploratory Data Analysis (EDA) using Gemma LLM | Kaggle

Providing a notebook used to be the only way on Kaggle to share a modified model with the community. But with fine-tuning taking hours, it is much better to be able to share the final result directly, as a Kaggle Model.


But wait, there’s more: Hugging Face!

With this release, KerasNLP also becomes a first-class citizen on Hugging Face. Keras models can be loaded directly from Hugging Face using KerasNLP, which is now one of the supported pre-trained model libraries on Hugging Face, alongside Transformers and Diffusers. See, for example, the Gemma Keras page on Hugging Face. You can load the model with:

# Load the model
gemma = keras_nlp.models.CausalLM.from_preset("hf://google/gemma-7b-instruct-keras")

And here is how you upload your model to Hugging Face. The only thing that changes compared to a Kaggle upload is "hf://" in place of "kaggle://" in the URI.

# Fine-tune the model
# ...

# Then save it as a KerasNLP preset.
gemma.save_to_preset("./gemma-pirate-instruct-7b")

# Upload the preset to Hugging Face Hub.
hf_uri = "hf://my_hf_username/gemma-pirate-instruct-7b"
keras_nlp.upload_preset(hf_uri, "./gemma-pirate-instruct-7b")
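One practical note, as an assumption about your environment: uploading to the Hub requires Hugging Face credentials, which you would typically supply by logging in with the huggingface_hub library (or huggingface-cli login) before calling upload_preset.

from huggingface_hub import login

# Prompts for a Hugging Face access token with write permissions.
login()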

Here is the result: a pre-populated model card on Hugging Face. You can add more information there and make your model public through the "Settings" tab:

[Screenshot: pre-populated Keras model card on Hugging Face]

Notice how the model was automatically recognized on Hugging Face and auto-tagged with “KerasNLP” and “Text Generation”.


Your turn to play on Kaggle or Hugging Face

It’s your turn to play. Happy model uploading!

To try the model upload code for yourself, see the Gemma 7B pirate upload to Kaggle and Hugging Face.

Find the official documentation for this feature here.

Source: Google Developers Blog