One of the most frequent questions when running LLMs locally is: "I have xx RAM and a yy GPU; can I run model zz?" I vibe-coded a simple application to help with exactly that.
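The underlying check such a tool performs is usually a back-of-envelope memory estimate: weights take roughly (parameter count × bytes per parameter), plus some overhead for the KV cache and activations. Below is a minimal sketch of that rule of thumb; the quantization byte sizes and the 20% overhead factor are common assumptions, not the linked app's actual formula.

```python
# Rough "will it fit?" estimate for a local LLM.
# Assumption: footprint ≈ params * bytes_per_param * overhead,
# where overhead (~1.2x) covers KV cache and activations.

BYTES_PER_PARAM = {
    "fp16": 2.0,  # 16-bit weights
    "q8": 1.0,    # 8-bit quantization
    "q4": 0.5,    # 4-bit quantization
}

def fits(params_billions: float, quant: str, mem_gb: float,
         overhead: float = 1.2) -> bool:
    """Return True if the model's estimated footprint fits in mem_gb."""
    need_gb = params_billions * BYTES_PER_PARAM[quant] * overhead
    return need_gb <= mem_gb

# A 7B model at 4-bit needs about 7 * 0.5 * 1.2 = 4.2 GB,
# so it fits on an 8 GB GPU; a 70B model at fp16 (~168 GB) does not fit in 24 GB.
print(fits(7, "q4", 8.0))
print(fits(70, "fp16", 24.0))
```

Real tools refine this with context length (KV cache grows with it) and per-runtime overhead, but this captures the basic arithmetic behind the question.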
Comments URL: https://news.ycombinator.com/item?id=43304436
Points: 21
# Comments: 26
Created: 9 Mar 2025, 00:40:09