Show HN: Tune Llama 3.1 on Google Cloud TPUs

Hey HN, we wanted to share our repo where we fine-tuned Llama 3.1 on Google TPUs. We’re building AI infra to fine-tune and serve LLMs on non-NVIDIA hardware (TPUs, Trainium, AMD GPUs).

The problem: Right now, 90% of LLM workloads run on NVIDIA GPUs, but there are equally powerful and more cost-effective alternatives out there. For example, training and serving Llama 3.1 on Google TPUs is about 30% cheaper than NVIDIA GPUs.

But developer tooling for non-NVIDIA chips is lacking, and we felt this pain ourselves. We initially tried PyTorch XLA to train Llama 3.1 on TPUs, and it was rough: the XLA integration with PyTorch is clunky, libraries we rely on are missing (bitsandbytes didn't work), and HuggingFace threw cryptic errors.
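To give a concrete sense of that friction, here is roughly what a minimal PyTorch XLA training step looks like (an illustrative sketch, not code from our repo; the model and loss are stand-ins). The lazy-tensor execution model needs XLA-specific calls like xm.mark_step() that a plain CUDA training loop never does:

    import torch
    import torch.nn as nn
    import torch_xla.core.xla_model as xm

    device = xm.xla_device()  # the TPU shows up as an XLA device
    model = nn.Linear(128, 128).to(device)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(8, 128, device=device)
        loss = model(x).pow(2).mean()   # dummy objective, just for illustration
        optimizer.zero_grad()
        loss.backward()
        xm.optimizer_step(optimizer)    # optimizer step plus cross-replica sync
        xm.mark_step()                  # flushes the lazily built XLA graph; easy to forget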

We then took a different route and translated Llama 3.1 from PyTorch to JAX, and now it runs smoothly on TPUs! We still have challenges ahead (there is no good LoRA library in JAX, for one), but this feels like the right path forward.
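To show what we mean by hand-rolling it, here is a minimal LoRA-style adapter in pure JAX (a hypothetical sketch following the standard LoRA formulation, not our repo’s actual implementation): freeze the base weight and train only a low-rank delta.

    import jax
    import jax.numpy as jnp

    def init_lora(key, d_in, d_out, rank=8):
        # Trainable low-rank factors; B starts at zero so the adapter is a no-op at init.
        return {
            "A": jax.random.normal(key, (rank, d_in)) * 0.01,  # down-projection
            "B": jnp.zeros((d_out, rank)),                     # up-projection
        }

    def lora_linear(lora, frozen_w, x, alpha=16.0):
        # Frozen base matmul plus the scaled low-rank update.
        rank = lora["A"].shape[0]
        base = x @ frozen_w.T
        delta = (x @ lora["A"].T) @ lora["B"].T
        return base + (alpha / rank) * delta

    # Usage: pretend frozen_w is one frozen Llama projection matrix.
    key = jax.random.PRNGKey(0)
    k_w, k_lora = jax.random.split(key)
    frozen_w = jax.random.normal(k_w, (256, 128)) * 0.02
    adapter = init_lora(k_lora, d_in=128, d_out=256)
    y = lora_linear(adapter, frozen_w, jnp.ones((2, 128)))  # shape (2, 256)

During training, only the adapter dict would be differentiated (e.g. jax.grad with respect to it) while frozen_w stays fixed.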

Here's a demo (https://dub.sh/felafax-demo) of our managed solution.

Would love your thoughts on our repo and vision as we keep chugging along!


Comments URL: https://news.ycombinator.com/item?id=41512142

Points: 34

# Comments: 3

https://github.com/felafax/felafax

Created Sep 11, 2024, 7:50:06 PM


