One of the most frequent questions people face while running LLMs locally is: "I have xx RAM and yy GPU — can I run zz LLM model?" I have vibe coded a simple application to help you with just that.
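The question above can be roughly answered with a common rule of thumb: weight memory is parameter count times bits per weight, plus some overhead for the KV cache and runtime buffers. The sketch below is a minimal illustration of that heuristic, not the app's actual method; the 20% overhead factor is an assumption.

```python
def estimate_model_memory_gb(params_billions: float,
                             bits_per_weight: int = 4,
                             overhead: float = 1.2) -> float:
    """Rough estimate of memory needed to run an LLM locally.

    params_billions : model size, e.g. 7 for a 7B model
    bits_per_weight : quantization level (16 = fp16, 4 = Q4, etc.)
    overhead        : assumed multiplier for KV cache and buffers
    """
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / (1024 ** 3)


# A 7B model at 4-bit quantization needs roughly 4 GB:
print(f"{estimate_model_memory_gb(7, bits_per_weight=4):.1f} GB")
```

If the estimate exceeds your VRAM, many runtimes can split layers between GPU and system RAM at reduced speed.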
Comments URL: https://news.ycombinator.com/item?id=43304436
Points: 21
# Comments: 26
Created: 9 Mar 2025, 00:40:09