Dagmawi Babi
Believer of Christ | Creative Developer.

Files Channel: https://t.me/+OZ9Ul_rSBAQ0MjNk

Community: @DagmawiBabiChat
Huge day indeed for AI and LLMs, congrats to Meta. This is now the most capable LLM available directly as weights to anyone! 👏

Pretrained and fine-tuned models are available with 7B, 13B and 70B parameters.

Llama 2 pretrained models are trained on 2 trillion tokens and have double the context length of Llama 1. Its fine-tuned models have been trained on over 1 million human annotations.

Site
https://ai.meta.com/llama/

Demo
https://huggingface.co/blog/llama2#demo

Meta has been stepping up this year and they're not even focused on AI as much! 🔥

#AI #Meta #ML #LLAMA
@Dagmawi_Babi
Downloaded LLAMA2 and set up the 13B param one locally and it's IMPRESSIVE!!!

First off, it's GPT-3 levels of good for everyday use. Second, it didn't require huge resources and it was actually fast! 🤯

So the first thought that came to mind was to set it all up on my server for all y'all to use. But ofc shared servers have limited resources and it wouldn't run at all.

I'll experiment and see if I can adjust the memory requirements to run on my servers. Then we can all use it freely.
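For context on why the memory requirements are adjustable at all: quantizing the weights to fewer bits per parameter is the usual trick (it's what llama.cpp does). The numbers below are my own back-of-envelope estimates for the weights only — they ignore the KV cache and runtime overhead, and aren't official figures:

```python
# Rough rule of thumb (assumption, not an official spec): weight memory
# ≈ parameter count × bytes per weight. Quantizing from 16-bit floats
# down to 4-bit integers cuts that footprint by 4x.
def weights_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GiB for a model at a given precision."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

# Llama 2 13B at common precisions:
for bits, label in [(16, "fp16"), (8, "8-bit"), (4, "4-bit")]:
    print(f"{label}: ~{weights_gb(13, bits):.1f} GiB")
# fp16 is roughly 24 GiB; 4-bit brings it down to roughly 6 GiB,
# which is why a quantized 13B can fit on a decent gaming PC.
```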

You should try it, especially if you have a nice gaming PC. And try the C++ version (llama.cpp) instead of the Python one; it's also super easy to set up and run:
https://github.com/ggerganov/llama.cpp

Use this guide to set up and run locally
https://dev.to/timesurgelabs/how-to-run-llama-2-on-anything-3o5m

Have fun! 🎉

#AI #Meta #ML #LLAMA
@Dagmawi_Babi
Zuck winning in life by releasing the first-ever open-sourced frontier AI model, LLAMA 3.1 405B, beating GPT-4o and others across benchmarks.

#LLAMA #AIML
@Dagmawi_Babi
Well, this is very impressive!

Cerebras Systems' inference service can serve LLAMA 3.1 70B at 450 tokens/sec and LLAMA 3.1 8B at 1,850 tokens/sec. I don't even know how that's possible, tbh.

Try and see how fast it is
inference.cerebras.ai
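To put those rates in perspective, a quick sanity check — just arithmetic on the figures quoted above, nothing measured by me:

```python
# How long a full answer takes at a given generation rate.
def seconds_for(tokens: int, tokens_per_sec: float) -> float:
    """Time to generate `tokens` tokens at a steady rate."""
    return tokens / tokens_per_sec

# Rates from the post: 450 tok/s (LLAMA 3.1 70B), 1,850 tok/s (8B).
for rate in (450, 1850):
    t = seconds_for(1000, rate)
    print(f"{rate} tok/s -> a 1,000-token answer in {t:.2f} s")
# At 450 tok/s a long 1,000-token answer lands in a bit over 2 seconds;
# at 1,850 tok/s it's done in about half a second.
```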

#CerebrasSystems #LLAMA #LLM #AIML
@Dagmawi_Babi
Llama 70B model running at 3,200 Tokens/sec 🔥

#LLAMA #AIML
@Dagmawi_Babi