as of now, no threshold, but that's planned for the future.
for example, if i search "cybertruck" in my indexed dashcam footage and there are no cybertrucks in it, it'll return a clip of the next best match: a big truck, but not a cybertruck.
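For illustration, the planned cutoff could be as simple as a floor on cosine similarity. A minimal sketch, not the repo's actual code; the names and min_score here are hypothetical and would need tuning per model:

    import numpy as np

    def search(query_vec, clip_vecs, clip_ids, min_score=0.55, top_k=5):
        # cosine similarity = dot product of L2-normalized vectors
        q = query_vec / np.linalg.norm(query_vec)
        c = clip_vecs / np.linalg.norm(clip_vecs, axis=1, keepdims=True)
        scores = c @ q
        order = np.argsort(scores)[::-1][:top_k]
        # with no threshold, a "cybertruck" query still returns the best big
        # truck; the cutoff turns weak matches into "no results" instead
        return [(clip_ids[i], float(scores[i])) for i in order
                if scores[i] >= min_score]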
This function will be a must-have for all home security systems. I used to spend hours going through home security camera footage to check if our cat got out of the house when the door was accidentally left open (it turned out the cat was just really good at hiding inside the house).
Most home monitoring only records when there is movement though? So that already compresses the search space a lot. And just zipping forward and back, it's pretty easy to quickly find the 30 seconds where there is a figure walking up to your front door.
gemini embedding 2 converts video straight to vectors. in this case, dashcam clips don't have audio to transcribe, and even if they did, it would be useless for the search.
dashcam and home security footage are the 2 main ones i can think of.
a bit expensive right now so it's not as practical at scale. but once the embedding model comes out of public preview, and we hopefully get a local equivalent, this will be a lot more practical.
I think a good use case would be searching for certain products or videos across social media (TikTok and Instagram). Especially useful for shopping, maybe.
The Matrix style human pods: we live in blissful ignorance in the Matrix, while the LLMs extract more and more compute power from us so some CEO somewhere can claim they have now replaced all humans with machines in their business.
I was thinking more of the season 3 episode of Doctor Who titled Gridlock where everyone lives in flying cars circling a giant expressway underground, while all the upper class people on the surface died years ago from a pandemic.
Yes? Right now it is relatively expensive to search video. As embedding tech like this advances and makes it even cheaper it just increases the ability to search and analyze every movement. “Locate speech patterns that indicate dissident activity using the dissident activity skill”
Not aware of any that do native video-to-vector embedding the way Gemini Embedding 2 does. There are CLIP-based models (like VideoCLIP) that embed frames individually, but they don't process temporal video. You'd need to average frame embeddings, which loses a lot.
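To make the contrast concrete, here is roughly what frame-level CLIP embedding with mean pooling looks like (a sketch using Hugging Face's CLIP; the model choice is illustrative):

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def embed_video_frames(frame_paths):
        # each sampled frame is embedded independently; CLIP has no notion
        # of time or motion
        images = [Image.open(p) for p in frame_paths]
        inputs = processor(images=images, return_tensors="pt")
        with torch.no_grad():
            feats = model.get_image_features(**inputs)  # (n_frames, 512)
        feats = feats / feats.norm(dim=-1, keepdim=True)
        # mean pooling collapses ordering: "car approaching" and "car
        # leaving" can average into nearly the same vector
        return feats.mean(dim=0)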
Would love to see open-weight models with this capability since it would eliminate the API cost and the privacy concern of uploading footage.
If there is text on the video (like a caption or whatever), will the embedding capture that? Never thought about this before. And if the video has audio, does the embedding capture that too?
Yes to both. The embedding is over raw video frames, so anything visible (text, signs, captions) gets captured in the vector. And Gemini Embedding 2 extracts the audio track and embeds it alongside the visual frames. So a query like 'someone yelling' would theoretically match on audio. My dashcam footage doesn't have audio though, so I haven't tested that side yet.
While the vector store is local, it is sending the data to Gemini's API for embedding. (Which, if you're using a paid API key, is probably fine for most use cases: no long-term retention, no training, etc.)
Very impressive! A webhook could be configured to trigger an alarm if a semantic match to any category of activities is detected, and then you basically have a virtual security guard and private investigator. Well played.
Thanks! Yeah that would be pretty cool, but continuous indexing would be pretty expensive now, because the model's in public preview and there are no local alternatives afaik.
This very well might be a reality in a couple years though!
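For what it's worth, the alerting loop behind that "virtual security guard" idea is simple once embeddings are cheap. A sketch with hypothetical names; embed_text stands in for whatever text-embedding call the indexer uses, and the cutoffs are untuned guesses:

    import numpy as np
    import requests

    # natural-language alerts mapped to similarity cutoffs
    ALERTS = {
        "person climbing the fence": 0.60,
        "car idling in the driveway at night": 0.62,
    }

    def check_chunk(chunk_vec, chunk_id, embed_text, webhook_url):
        v = chunk_vec / np.linalg.norm(chunk_vec)
        for query, cutoff in ALERTS.items():
            q = embed_text(query)
            q = q / np.linalg.norm(q)
            score = float(v @ q)
            if score >= cutoff:
                # fire the alarm with enough context to pull the clip
                requests.post(webhook_url, json={
                    "query": query,
                    "chunk": chunk_id,
                    "score": score,
                }, timeout=10)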
This is a really cool implementation; embeddings still often feel like magic to me. That said, this exact use case is sort of also my biggest point of concern with where AI takes us, much more so than most of the common AI risks you hear lots of chatter about. We live in a world absolutely loaded with cameras now, but we ultimately retain some semblance of semi-anonymity/privacy in public by virtue of the fact that nobody can actually watch or review all of the video from those cameras except when there is a compelling reason to do so. These technologies are making watching all of it a much more realistic proposition.
The presence of cameras everywhere is considerably more concerning than the status quo, to me at least, when there is an AI watching and indexing every second of every feed, and camera owners or manufacturers or governments can set simple natural-language parameters for highly specific people or activities to be notified about. There are obviously compelling and easy-to-sell cases here that will surely drive adoption as it becomes cost-effective: get an alert about a crime in progress, get an alert when a neighbor doesn't clean up after his dog, get an alert when someone has fallen. But the potential implications of living in a panopticon like this, if not well regulated, are pretty ugly.
Totally valid concern. Right now the cost ($2.50/hr) and latency make continuous real-time indexing impractical, but that won't always be the case. This is one of the reasons I'd want to see open-weight local models for this: it keeps the indexing on your own hardware, with no footage leaving your machine. But you're right that the broader trajectory here is worth thinking carefully about.
It's $2.50 an hour because Google has margins. A nation state could do it at cost, and even if that's not a huge difference, a year's worth of embeddings for one feed is just $21,900. That's a rounding error, especially considering it's a one-time cost for the footage.
Right? $2.50 an hour is trivial to a government that can vote to invent a trillion dollars. Even just $1 million covers monitoring 45 real-time feeds for a year. I'm sure many very rich people would pay that for the safety of their compound.
https://ai.google.dev/gemini-api/docs/pricing#gemini-embeddi...
From what I see, the code downsamples video to 5 fps, so 1 hour of video is 3600 seconds * 5 fps = 18,000 frames. 18,000 frames * $0.00079/frame = $14.22. A couple dollars more with the overlap.
(The code also tries to skip "still" frames, but if your video is dynamic you're looking at the cost above.)
you're right that the code uses ffmpeg to downsample the chunks to 5fps before sending them, but that's only a local/bandwidth optimization, not what the api actually processes.
regardless of the file's frame rate, the gemini api natively extracts and tokenizes exactly 1 fps. the 5 fps downscaling just keeps the payload sizes small so the api requests are fast and don't timeout.
i'll update the readme to make this more clear. thanks for bringing this up.
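Spelled out, the two calculations differ only in the assumed frame rate (per-frame price as quoted in this thread):

    # the API tokenizes 1 frame/sec of uploaded video regardless of file fps
    SECONDS_PER_HOUR = 3600
    PRICE_PER_FRAME = 0.00079  # $/frame

    actual = SECONDS_PER_HOUR * 1 * PRICE_PER_FRAME   # $2.84/hr (1 fps)
    assumed = SECONDS_PER_HOUR * 5 * PRICE_PER_FRAME  # $14.22/hr (5 fps)
    print(f"1 fps: ${actual:.2f}/hr vs 5 fps: ${assumed:.2f}/hr")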
It's being built as we speak. I attended a city council meeting yesterday discussing approval of a contract for ALPR cameras. I learned about a product from the camera vendor called Fusus[0], a dashboard that integrates various camera systems, ALPRs, alerts, etc. Two things stood out to me: natural-language querying of video feeds, and planned future integration with civilian-deployed cameras. The city only had budget for 50 ALPRs, and they stressed that they're only deploying them on main streets, but it seems like only a matter of time before your neighbor can install a camera that feeds right into the local PD's AI-enabled systems. One council member raised concerns about integrations with the Citizen app[1] specifically (and a few others I didn't catch the names of). I'm very worried about where all this is heading.
[0]: https://www.axon.com/products/axon-fusus
[1]: https://citizen.com/
I live in Oxford, UK and walked past a police van that said "automatic facial recognition in use". Not exactly a good sign without any caveats. I imagine they recorded me staring at their van.
Most cameras are also not queryable by any one person or organization. They are owned by different companies and if the government wants access they have to subpoena them after the fact.
The problems start cropping up when you get things like Flock where governments start deploying cameras on a massive scale, or Ring where a single company has unrestricted access to everyone's private cameras.
I think Flock is just a symptom of the underlying tech becoming so cheap that "just blanket the city in cameras" starts to sound like a viable solution when police rely so heavily on camera footage.
I don't think it's a good thing but it seems the limiting factor has been technological feasibility instead of any kind of principle against it.
For specific people they probably wouldn’t use general embeddings. These embeddings can let you search for “tall man in a trenchcoat” but if you want a specific person you would use facial recognition.
I think a general description is better for surveillance/tracking like this, no? If they're at a weird angle or intentionally concealing their face then facial recognition falls apart but being able to describe them naturally would result in better tracking IMO.
Presumably the ideal is some kind of a fusion. Upload or tag some images/videos and link someone's social profiles and the system can look out for them based on facial recognition, gait recognition, vehicle/pets/common wardrobe items in combination.
All the major cloud providers offer some form of face detection and numberplate reading, with many supporting object detection (ie package, vehicle, person) out of the camera itself.
It's definitely creeping into things, though most of the features I've seen are fairly simplistic compared to what would be possible if the video was being reviewed + indexed by current SoTA multimodal LLMs.
Once the hardware to run inference for something like the vision-understanding module of this can run on a low/medium-power ASIC, drones are going to be absolutely horrifying weapons.
> this exact use case is sort of also my biggest point of concern with where AI takes us, much more so than most of the common AI risks you hear lots of chatter about.
I've been hearing warnings that AI would be used for this since well before it seemed feasible.
Could this be used for creating video editing software?
Imagine a Premiere plugin where you could say "remove all scenes containing cats" and it'll spit out an EDL (Edit Decision List) that you can still manually adjust.
Yeah, this is a great idea, I’ve actually been thinking about exactly this as the next logical step.
SentrySearch already returns precise in/out timestamps for any natural-language query and uses ffmpeg to auto-trim clips. Turning that into an EDL (or even a direct Premiere plugin that exports an editable cut list) feels natural.
I’m not a Premiere expert myself, but I’d love to see this happen. If you (or anyone) wants to sketch out a quick EDL exporter or plugin, I’ll happily review + merge a PR and help wherever I can. Just drop a GitHub issue if you start something!
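A sketch of the EDL side, going from keep-ranges in seconds to CMX3600-style events. The event formatting is simplified and assumes a fixed integer frame rate; a real exporter would need source reel names and drop-frame handling:

    def tc(seconds, fps=30):
        # seconds -> HH:MM:SS:FF non-drop-frame timecode
        frames = round(seconds * fps)
        h, rem = divmod(frames, 3600 * fps)
        m, rem = divmod(rem, 60 * fps)
        s, f = divmod(rem, fps)
        return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

    def write_edl(title, keep_ranges, fps=30):
        # keep_ranges: [(src_in, src_out), ...] in seconds, e.g. every
        # segment that did NOT match "scenes containing cats"
        lines = [f"TITLE: {title}", "FCM: NON-DROP FRAME", ""]
        rec = 0.0
        for i, (src_in, src_out) in enumerate(keep_ranges, 1):
            dur = src_out - src_in
            lines.append(f"{i:03d}  AX       V     C        "
                         f"{tc(src_in, fps)} {tc(src_out, fps)} "
                         f"{tc(rec, fps)} {tc(rec + dur, fps)}")
            rec += dur
        return "\n".join(lines)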
I picked up a Rexing dash cam a few months back and, after getting frustrated with how clunky it is to get footage off of it, I decided to look into building something myself to browse and download the recordings without having to pull the SD card. While scrolling through the recordings, I explicitly remember thinking it would be nice to just describe what I was looking for and run a search. Looking forward to incorporating this into my project.
I've found I have to be very specific to get the clip I'm searching for. For example, "car cuts me off" just returned a clip of a car driving past my blindspot. But, "car with bike rack on back cuts me off at night" gave me exactly the clip I was looking for.
I wonder if the underlying improvements in visual language learning will allow for even more efficient search. The First Fully General Computer Action Model -> https://si.inc/posts/fdm1/
I don't quite understand the 5 second overlap.
I assume it's so that events that occur over the chunk boundary don't get missed, but is there any examples or benchmarking to examine how useful this is?
yea, it's so events on a chunk boundary still get captured in at least one chunk. i haven't had the chance to do formal benchmarks on overlap vs. no-overlap yet. the 5s default is a pragmatic choice: long enough to catch most events that would otherwise be split, short enough to not add much cost (120 chunks/hr to ~138). it's also configurable via the --overlap flag.
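One plausible version of that chunking, sketched (assumed behavior, not necessarily the repo's exact code): windows step forward by chunk minus overlap seconds, so any event shorter than the overlap lands whole in at least one chunk.

    def chunk_spans(duration, chunk=30.0, overlap=5.0):
        # yield (start, end) times covering `duration` seconds of video
        step = chunk - overlap  # 25s stride with the defaults
        start = 0.0
        while start < duration:
            yield (start, min(start + chunk, duration))
            start += step

    # an event at t=29..32s straddles the 0-30s boundary, but sits fully
    # inside the 25-55s chunk
    spans = list(chunk_spans(3600.0))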
dashcam is just one of the use cases and the one i tested on. but this could theoretically work with any kind of video footage like home security footage
Damn, I need to get going with my embeddings project. I've currently got a prototype for using embeddings (not gemini in my case) for making a game that's kinda reverse connections:
Multimodal AI will lead to an interesting arms race in ad detection vs ad insertion. I played around with AI ad removal with older Gemini models, but it seems like this would be even more powerful to instantly identify ads (and potentially mute or strip them out).
https://notes.npilk.com/experiments-with-ai-adblock
Nice article. I saw someone depicting the future of web search with AI. The conclusion was not a bright one. Simply put, ads will never go away. Either AI providers will get paid for whitelisting ads or, even worse, these AIs will directly promote advertised products.
People could collectively decide to start paying for stuff, and then at least most of our gripes would shift to providers not accommodating their paying customers.
To which I'd say to the advertiser, "Good luck paying off the AI adblocker running in my closet at home."
Then again, let's not be too hasty here. Let's see what you're willing to offer. I can sell you the eyeballs of the AI ad-watcher running in my closet for $10/impression. Or, for $1000/impression, you can bring your message to the attention of myself, an actual human. A bargain at any price!
The README on the GitHub has a section on this[0]:
>Indexing 1 hour of footage costs ~$2.84 with Gemini's embedding API (default settings: 30s chunks, 5s overlap):
>1 hour = 3,600 seconds of video = 3,600 frames processed by the model. 3,600 frames × $0.00079 = ~$2.84/hr
>The Gemini API natively extracts and tokenizes exactly 1 frame per second from uploaded video, regardless of the file's actual frame rate. The preprocessing step (which downscales chunks to 480p at 5fps via ffmpeg) is a local/bandwidth optimization — it keeps payload sizes small so API requests are fast and don't timeout — but does not change the number of frames the API processes.
[0] https://github.com/ssrajadh/sentrysearch#cost
In the demo bro shows how to search for "a car with a bike rack on the back that cut me off at night." Given the grudge he must've held from being cut off, I strongly suspect that finding this specific car was his main motivation for building the project in the first place
I believe you could use a combination of select and scene parameters in ffmpeg to do this automatically when a chunk of video is created each time.
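For reference, a sketch of that ffmpeg approach, using the select filter's scene-change score to drop near-still frames as each chunk is written. The threshold value is a guess and would need tuning per camera:

    import subprocess

    def strip_still_frames(src, dst, scene_threshold=0.003):
        # keep only frames whose scene-change score exceeds the threshold;
        # setpts regenerates timestamps so the trimmed output plays smoothly
        subprocess.run([
            "ffmpeg", "-i", src,
            "-vf", f"select='gt(scene,{scene_threshold})',"
                   "setpts=N/FRAME_RATE/TB",
            "-an", dst,
        ], check=True)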