Show HN: 80% faster, 50% less memory, 0% loss of accuracy Llama finetuning
55 by danielhanchen | 10 comments on Hacker News.
Hi HN! I'm sharing a project I've been working on during the LLM Efficiency Challenge: you can now finetune Llama with QLoRA 5x faster than Hugging Face's original implementation on your own local GPU. Some highlights:

1. Manual autograd engine with hand-derived backprop steps (a rough sketch of the idea is below).
2. QLoRA / LoRA finetuning is 80% faster and uses 50% less memory.
3. All kernels are written in OpenAI's Triton language (see the minimal Triton sketch below).
4. 0% loss in accuracy: no approximation methods, everything is exact.
5. No change of hardware necessary. Supports NVIDIA GPUs from 2018 onward (CUDA capability 7.5+).
6. Flash Attention support via Xformers.
7. Supports 4-bit and 16-bit LoRA finetuning.
8. Train on Slim Orca fully locally in 260 hours instead of 1,301 hours (5x faster).
9. The open-source version trains 5x faster, or you can check out the Unsloth Pro and Max code paths for 30x faster training!

https://ift.tt/5asFgeP... has more info about Unsloth. Hopefully you can try it out! I also wrote a blog post at https://ift.tt/FmnplJh if you want to learn more about the hand-derived backprop and the Triton kernels. Thanks once again!
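To make highlight 1 concrete, here is a minimal, hypothetical sketch of what a hand-derived backprop step for a LoRA layer can look like, written as a PyTorch autograd.Function. This is illustrative only and not Unsloth's actual code: the layer form y = x·Wᵀ + s·(x·Aᵀ)·Bᵀ, the class name ManualLoRALinear, and the 2D-input assumption are all mine.

```python
import torch

# Hypothetical illustration of "hand-derived backprop": the backward pass for a
# LoRA layer y = x @ W.T + s * (x @ A.T) @ B.T is written out by hand instead of
# being traced by autograd. W is the frozen base weight; A, B are the LoRA adapters.
class ManualLoRALinear(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, W, A, B, scale):
        # x: (n, d_in), W: (d_out, d_in), A: (r, d_in), B: (d_out, r)
        ctx.save_for_backward(x, W, A, B)
        ctx.scale = scale
        return x @ W.t() + scale * (x @ A.t()) @ B.t()

    @staticmethod
    def backward(ctx, grad_out):
        x, W, A, B = ctx.saved_tensors
        s = ctx.scale
        grad_x = grad_out @ W + s * (grad_out @ B) @ A   # dL/dx
        grad_A = s * (grad_out @ B).t() @ x              # dL/dA, derived by hand
        grad_B = s * grad_out.t() @ (x @ A.t())          # dL/dB, derived by hand
        return grad_x, None, grad_A, grad_B, None        # frozen W and scale: no grad
```

These are the exact gradients of the forward expression, which is the sense in which there is no approximation and no loss of accuracy; the speedup comes from simplifying and fusing the chain-rule steps, not from changing the math.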
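Similarly, for highlight 3, here is a minimal sketch of what a Triton kernel looks like. It only illustrates the style (a plain elementwise SiLU on a CUDA tensor) and is not one of Unsloth's kernels.

```python
import torch
import triton
import triton.language as tl

# Illustrative Triton kernel: each program instance handles one block of elements.
@triton.jit
def silu_kernel(x_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offsets < n_elements            # guard against out-of-bounds accesses
    x = tl.load(x_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x * tl.sigmoid(x), mask=mask)

def silu(x: torch.Tensor) -> torch.Tensor:
    # x must be a contiguous CUDA tensor for Triton to launch the kernel.
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)
    silu_kernel[grid](x, out, n, BLOCK=1024)
    return out
```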