Dagmawi Babi
6.43K subscribers
14.8K photos
1.96K videos
231 files
2.06K links
Believer of Christ | Creative Developer.

Files Channel: https://t.me/+OZ9Ul_rSBAQ0MjNk

Community: @DagmawiBabiChat
Chain of Affection
Informally known as the "Fuck Graph"

Researchers studied a high school called Jefferson, mapping 800 students' sexual interactions with each other over a span of 18 months. And this was the network. 💀

Legend
• Blue dots are male
• Pink dots are female
• The lines show a relationship
• The numbers show the number of times they had relations

JUST CHECK OUT THE GRAPH, IT'S HILARIOUSLY INTERESTING. 😅

I mean just look at that monstrous graph at the top left. What kind of high school is this?!! 🤯

#DataViz #Papers
@Dagmawi_Babi
Chains of Affection.pdf
3.7 MB
Chains of Affection: The Structure of Adolescent Romantic and Sexual Networks

#Papers
@Dagmawi_Babi
Here is the research paper
https://t.me/c/1156511084/813

Let's not even talk about how they trained it without human supervision on unlabelled videos. AND it scaled well as they increased GPUs, which most LLMs don't.

#Google #Genie #AI #Papers
@Dagmawi_Babi
Just finished reading the paper Attention Is All You Need, which introduced the transformer neural network architecture that gave rise to all the LLMs & gen AI we know of today.
https://t.me/c/1156511084/838

Astounding how clever some people are. Took me a while to really understand, but in its most basic form: long ago, neural networks processed sequences one step at a time, so training was sequential. Thanks to transformers, the whole sequence can be trained on in parallel. 🤯

It's basically like multi-threading but for neural nets.
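To make the parallelism concrete, here's a toy sketch (all names illustrative, using NumPy) of the transformer's core trick, scaled dot-product attention: every position's output comes out of one batched matrix multiply, instead of stepping through the sequence token by token the way an RNN must.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (seq, seq) similarity matrix, all pairs at once
    scores -= scores.max(axis=-1, keepdims=True)  # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                            # each output is a weighted mix of values

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8
X = rng.standard_normal((seq_len, d_model))

# Self-attention: queries, keys, and values all come from the same sequence.
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (5, 8): all 5 positions computed together, no sequential loop
```

The point of the sketch is that nothing here depends on position i finishing before position i+1, which is exactly what lets GPUs chew through training in parallel.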

#Papers #AttentionIsAllYouNeed
@Dagmawi_Babi
One nice teacher shared a ton (30+) of research papers across so many topics with me.

Enjoy
https://t.me/c/1156511084/904

#Papers
@Dagmawi_Babi
Why Do Telegram Channels Fail?.pdf
"The thing is most creatives consider their portfolio to be a reflection of themselves" is completely true when it comes to me. ❤️

"There's too much interesting stuff in the world, I'd be selfish to keep the ones I run into to myself." 💯

#Papers
@Dagmawi_Babi
Cores that don't count.pdf
131.8 KB
Here's a very interesting paper I read this week

What happens if your CPU gets something wrong? If it wakes up one day and decides 2+2=5?

Well, most of us will never have to worry about that. But if you work at a company the size of Google, you do, which is why this paper on "mercurial cores" is so fascinating.

What the authors report--and supposedly this is common knowledge at the hyperscalers--is that a couple cores per several thousand machines are "mercurial." Due to subtle manufacturing defects or old age, they give wrong answers for certain instructions. These can cause all sorts of impossible-to-diagnose issues. Some rare problems at Google that were traced back to bad CPUs include:
• Mutexes not working, causing application crashes
• Silent data corruption
• Garbage collectors targeting live memory, causing application crashes
• Kernel state corruption causing kernel panics

What makes CPUs go bad? It's very hard to tell. The authors posit that issues are becoming more frequent as CPUs get more complex, but there aren't solid numbers behind that. There are certainly strong relationships between frequency, temperature, voltage, and bad CPU behavior--most mercurial CPUs only cause problems under very specific conditions, but those conditions vary from CPU to CPU. Age matters too: older CPUs are more likely to misbehave.

Bad CPUs are an especially serious problem because they're very hard to detect. If cosmic rays flip bits in storage or on the network, that can be detected through error coding. But there's no analogous mechanism for cheaply verifying a CPU's correctness online. Instead, the best detection techniques involve monitoring for symptoms. If a core exhibits exceptionally high rates of process crashes or kernel panics relative to its fellows, that's a strong indication something is wrong with it. For the most critical applications, the authors propose triple modular redundancy: running each computation on three cores and majority-voting a reliable result.
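Triple modular redundancy is simple enough to sketch. This is a hypothetical toy (the function names and flaky-core simulation are mine, not the paper's): run the same computation three times, which in a real system would mean on three different cores, and majority-vote the answers so a single mercurial core can't silently corrupt the result.

```python
from collections import Counter

def run_with_tmr(cores, x):
    """Run the same computation on three replicas and majority-vote the result."""
    results = [core(x) for core in cores]
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        # No two replicas agree: fail loudly rather than return garbage.
        raise RuntimeError("no majority: all three replicas disagree")
    return value

healthy = lambda x: x * x
mercurial = lambda x: x * x + 1   # a "mercurial core" that answers subtly wrong

print(run_with_tmr([healthy, healthy, mercurial], 12))  # 144: the bad core is outvoted
```

The obvious cost is 3x the compute, which is why the authors reserve it for the most critical applications rather than everything.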

#Papers #CoresThatDontCount
@Dagmawi_Babi
Audio
First up is @DaveDumps' paper titled "Why Do Telegram Channels Fail" and my oh my it's SO GOOD!! It's an absolute resource, worth sharing with anyone wanting to create Telegram channels and more.
Audio
Then a really interesting research paper titled "Cores that don't count"
Audio
Then I fed in Sam Altman's "How to be successful" blog post so I could listen to it.
Audio
Doubled down on Sam Altman and wanted to see how the podcast interpreted his latest blog post, "The Intelligence Age", which should give them a fair amount of fantasy and sci-fi to expand on.

#NotebookLM #AIML #Gemini
#DeepDive #Podcasts #AIPodcast
#Papers #Blogs #Resources
@Dagmawi_Babi
I generated a whole ton of deep dive episodes across a variety of topics and resources. But I won't spam this main channel with the files, so I've uploaded all of them to my files repository channel, which you can join and check out.

Here are the episodes...

Andrej Karpathy's Blogs
• Software 2.0 (Podcast)
• A Survival Guide to a PhD (Podcast)
• A Cognitive Discontinuity (Podcast)
• A from-scratch tour of Bitcoin (Podcast)

Research Papers
• Attention Is All You Need (Podcast)
• Chains of Affection (Podcast)
• The Oldest Open Problem in Math (Podcast)

C.S. Lewis Books
• Mere Christianity (Podcast)
• The Screwtape Letters (Podcast)

Other Books
• Measurement by Paul Lockhart (Podcast)

Enjoy these resources and episodes, and if you've created some, share them with our community! 💛

#NotebookLM #AIML #Gemini
#DeepDive #Podcasts #AIPodcast
#Papers #Blogs #Resources
@Dagmawi_Babi
100 LLM Papers to explore (1).zip
166.1 MB
100 LLM Papers to explore

This curated collection comprises 100 papers that delve into the world of Large Language Models (LLMs). If you're an enthusiast, researcher, or simply someone looking to explore developments in the field of language models, this dataset is a treasure trove.

The papers in this dataset cover a wide range of topics within the LLM domain, from the foundational Transformer architectures to advanced techniques in model compression, activation functions, pruning, quantization, normalization, sparsity, fine-tuning, sampling, scaling, mixture of experts, watermarking, and much more.

#Papers #LLMs
@Dagmawi_Babi
Highly accurate protein structure prediction with AlphaFold.pdf
3.5 MB
Highly accurate protein structure prediction with AlphaFold. AlphaFold is obviously very impressive.

#Papers
@Dagmawi_Babi
Turns out people tip more to women who are young, have big boobs, dyed (or blonde) hair, and are thinner.

And if you're older, have smaller boobs, normal (non-blonde) hair, or are heavier, you get tipped less.

No surprise that attractiveness plays a big role in the waitressing industry. Found it in this research paper done in 2008.

#Papers
@Dagmawi_Babi