July 11th, 2024, 11AM ET
[Sponsored Post] This webinar offers a guide to installing Hugging Face transformers, Meta’s Llama 3 weights, and the required dependencies for running Llama locally on AMD systems with ROCm™ 6.0.1.
Webinar Topics
- Walk through a basic understanding of transformer models; introduce LLMs, Hugging Face, and some of the recent work done between AMD and Hugging Face
- How to install the torch and transformers frameworks and their dependencies on AMD Instinct and Radeon GPUs with ROCm 6.1
- Walk through a set of publicly available scripts for running llama2-7b and serving it on AMD Instinct™ MI210 and AMD Radeon™ W7800 GPUs on a system with ROCm 6.1 installed
- Share where to find documentation and blog posts from AMD
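As a preview of the installation steps covered above, the setup can be sketched roughly as follows. This is a minimal sketch, not the webinar's exact script: it assumes PyTorch's ROCm 6.1 wheel index (check pytorch.org for the index matching your ROCm version) and that your ROCm driver stack is already installed.

```shell
# Install a ROCm build of PyTorch, then the Hugging Face libraries.
# The index URL below is an assumption based on PyTorch's published ROCm wheels.
pip install torch --index-url https://download.pytorch.org/whl/rocm6.1
pip install transformers accelerate

# Sanity check: a ROCm build of torch reports a HIP version, and the GPU
# should be visible through torch's CUDA-compatible device API.
python -c "import torch; print(torch.version.hip, torch.cuda.is_available())"
```

Note that the llama2-7b weights themselves are gated on the Hugging Face Hub, so downloading them additionally requires accepting Meta's license and authenticating with `huggingface-cli login`.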
Following the live coding section, there will be a brief wrap-up to share ROCm resources, followed by a Q&A session with ROCm experts.
Register here.
1 For a full list of Radeon products supported by ROCm™ software as of 5/1/2024, visit https://rocm.docs.amd.com/en/latest/reference/gpu-arch-specs.html. GD-241