Phind 2: AI search with visual answers and multi-step reasoning

Hi HN! Michael here. We've spent the last 6 months rebuilding Phind. We asked ourselves what types of answers we would ideally like and crafted a new UI and model series to help get us there. Our new 70B is completely different from the one we launched a year ago.

The new Phind goes beyond text to present answers visually, with inline images, diagrams, cards, and other widgets that make responses more meaningful:

- "explain photosynthesis" (demo video, t=7)

- "how to cook the perfect steak" (demo video, t=55)

- "quicksort in rust" (demo video, t=105)
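For reference, the classic algorithm behind the third query fits in a few lines. This is a plain illustration of what the query asks for (shown in Python rather than Rust for brevity), not Phind's actual generated answer:

```python
def quicksort(items):
    """Recursive quicksort: partition around a pivot, sort each side."""
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)

print(quicksort([33, 10, 55, 71, 29, 3, 18]))  # [3, 10, 18, 29, 33, 55, 71]
```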

Phind can also seek out information on its own. If it needs more information, it will run multiple rounds of additional searches to give you a more comprehensive answer:

- "top 10 Thai restaurants in SF, their prices, and key dishes" (demo video, t=11)

It can also perform calculations, visualize their results, and verify them in a Jupyter notebook:

- "simulate 100 coin flips and make graphs" (demo video, t=8)

- "train a perceptron neural network using Jupyter" (demo video, t=45)
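The coin-flip query above boils down to a few lines of standard-library Python. This is an illustrative sketch of the underlying computation (plotting omitted), not Phind's actual notebook output:

```python
import random
from collections import Counter

def flip_coins(n, seed=None):
    """Simulate n fair coin flips and tally heads vs. tails."""
    rng = random.Random(seed)  # seeded for reproducibility
    flips = [rng.choice(["heads", "tails"]) for _ in range(n)]
    return Counter(flips)

counts = flip_coins(100, seed=42)
print(counts)
# A notebook answer would typically follow this with a bar chart, e.g.
# matplotlib's plt.bar(counts.keys(), counts.values())
```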

This blog post contains an overview of what we did as well as technical deep dives into how we built the new frontend and models.

I'm super grateful for all of the feedback we've gotten from this community and can't wait to hear your thoughts!


Comments URL: https://news.ycombinator.com/item?id=43039308

Points: 123

# Comments: 53

https://www.phind.com/blog/phind-2

Created 4h ago | 13 Feb 2025, 20:30:23


Accedi per aggiungere un commento