tbh I think it is very funny. Sent it to a few friends because of that. Was chased by a cursor for a minute and then did the same to him/her. Childish, but funny.
I had a discussion at work today about completely replacing the old search engine with vector embeddings because they work so well.
1) They work really well, but not in all cases, and most likely the better approach is a hybrid of the two. OpenSearch/Elasticsearch already include hybrid approaches.
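One common way to hybridize the two, as the comment suggests, is to run a keyword search and a vector search separately and merge the ranked lists. A minimal sketch using reciprocal rank fusion (RRF) — the result lists, document IDs, and the k=60 constant here are all illustrative, not taken from any specific engine:

```python
# Reciprocal rank fusion: each result list contributes 1/(k + rank)
# to a document's score, so documents ranked high in *either* list
# float to the top of the merged ranking.
def rrf(rankings, k=60):
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc3", "doc1", "doc7"]  # e.g. from BM25
vector_hits = ["doc1", "doc5", "doc3"]   # e.g. from embedding search
merged = rrf([keyword_hits, vector_hits])
print(merged)  # doc1 first: it ranks well in both lists
```

RRF needs no score normalization between the two systems, which is why it is a popular fusion choice in hybrid setups.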
Only indirectly. A lot of popular models used for generating vectors are nowhere near as smart as LLMs. Also, the vectors themselves are not machine learning models; they are just lists of numbers intended to be compared against other lists of numbers, typically using a similarity metric like cosine similarity.
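The comparison described above can be sketched in a few lines. The vectors here are made up for illustration; real embeddings come from a model and typically have hundreds of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

doc = [0.2, 0.8, 0.1]
query_close = [0.25, 0.75, 0.05]  # points in nearly the same direction
query_far = [0.9, -0.1, 0.4]      # points elsewhere

print(cosine_similarity(doc, query_close))  # close to 1.0
print(cosine_similarity(doc, query_far))    # much lower
```

Vector search is essentially this comparison run against many stored vectors, usually accelerated by an approximate nearest-neighbor index rather than a linear scan.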
itronitron | 1 year, 7 months ago
seu | 1 year, 7 months ago
Cyuonut | 1 year, 7 months ago
barrenko | 1 year, 7 months ago
psini | 1 year, 7 months ago
pelagicAustral | 1 year, 7 months ago
onemiketwelve | 1 year, 7 months ago
notnaut | 1 year, 7 months ago
baxtr | 1 year, 7 months ago
I guess it looked kinda nice with 1-2 visitors per minute but now…
Dunedan | 1 year, 7 months ago
omegabravo | 1 year, 7 months ago
BozeWolf | 1 year, 7 months ago
It only works well when the site is busy. The frontpage of HN makes it work very very well.
No idea what the article is about though.
bambax | 1 year, 7 months ago
linsomniac | 1 year, 7 months ago
aruggirello | 1 year, 7 months ago
Semaphor | 1 year, 7 months ago
Unlike the recently frontpaged page, this one loads the annoying effects as 3rd party JS, so uMatrix automatically blocked it for me.
llm_trw | 1 year, 7 months ago
I think google needs to be very afraid in the coming few years because this use of AI is relatively cheap to run, simple to deploy and the models are small enough that you can build one customized based on your personal ranking of several thousand pieces of text.
temp20240604 | 1 year, 7 months ago
infecto | 1 year, 7 months ago
2) The moat is having a snapshot of the web and being able to search through it efficiently.
impostervt | 1 year, 7 months ago
firejake308 | 1 year, 7 months ago
jillesvangurp | 1 year, 7 months ago
Vector search embeddings are only as good as the models you use, the content you have, and the questions you ask.
This is a bit of a pitfall when you use them for search, especially if you have mobile users, because most of them are not going to thumb full sentences into a search box. The queries they type are going to be a few letters or words at best, with very little context, and users will still expect good results. Vector search is not great for those types of use cases because there just isn't a whole lot of semantics in such short queries. Sometimes all you need is a simple prefix search.
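The kind of simple prefix search mentioned above can be as little as a binary search over a sorted list of titles. A minimal sketch — the titles and function names are illustrative, not from any real codebase:

```python
import bisect

titles = sorted([
    "vector databases explained",
    "vector search vs keyword search",
    "velocity templates",
    "victorian architecture",
])

def prefix_search(prefix):
    """Return all titles starting with `prefix`, via binary search
    into the sorted list followed by a short forward scan."""
    lo = bisect.bisect_left(titles, prefix)
    out = []
    for title in titles[lo:]:
        if not title.startswith(prefix):
            break  # sorted order: no later title can match
        out.append(title)
    return out

print(prefix_search("vec"))  # matches the two "vector ..." titles only
```

Real systems usually back this with a trie or an `edge_ngram`-style index, but the point stands: for two or three typed characters, exact prefix matching beats semantic similarity.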
sbarre | 1 year, 7 months ago
That part isn't clear to me.
billywhizz | 1 year, 7 months ago
https://blog.cloudflare.com/cloudflare-acquires-partykit
amelius | 1 year, 7 months ago