Show HN: GibRAM, an in-memory ephemeral GraphRAG runtime for retrieval

60 points by ktyptorio a month ago on Hacker News | 10 comments

VoidWhisperer | a month ago

Out of curiosity, did you settle on that name before or after the RAM availability/price issues?

mirekrusin | a month ago

GrrHDD

[OP] ktyptorio | a month ago

Actually, the name definitely came after noticing RAM prices. Though the idea of keeping the graph in memory only for ephemeral RAG sessions came first, we won't pretend the naming wasn't influenced by RAM being in the spotlight.

zwaps | a month ago

Very cool, kudos

Where might one see more about what type of indexing you do to get the graph?

threecheese | a month ago

[OP] ktyptorio | a month ago

Exactly, thank you. Still using LLM-based extraction.

ekianjo | a month ago

how do you search the graph network?

[OP] ktyptorio | a month ago

There are two steps:

Vector search (HNSW): Find top-k similar entities/text units from the query embedding

Graph traversal (BFS): From those seed entities, traverse relationships (up to 2 hops by default) to find connected entities

This catches both semantically similar entities AND structurally related ones that might not match the query text.

Implementation: https://github.com/gibram-io/gibram/blob/main/pkg/engine/eng...
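The two steps described above can be sketched in Python. This is not GibRAM's actual code (the real implementation is in Go, linked above); it is a minimal illustration where brute-force cosine similarity stands in for HNSW, and the graph, embeddings, and function names are all hypothetical:

```python
import heapq
from collections import deque

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def retrieve(query_vec, embeddings, edges, k=2, max_hops=2):
    # Step 1: vector search. Brute-force top-k here; a real system
    # would use an ANN index such as HNSW for the same purpose.
    seeds = heapq.nlargest(
        k, embeddings, key=lambda e: cosine(query_vec, embeddings[e])
    )
    # Step 2: BFS over relationships from the seed entities,
    # expanding up to max_hops hops (2 by default).
    seen = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for nbr in edges.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return seen
```

The traversal step is what pulls in structurally related entities: a node like `dave` below never matches the query vector, but is reachable within two hops of a seed and so ends up in the result set.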

kordlessagain | a month ago

This is how I did it a few years back while working for a set store company. It works well.

nirdiamant | a month ago

The separate graph and vector storage can indeed add overhead for short-lived tasks. I've found that using a dual-memory architecture, where episodic and semantic memories coexist, can streamline this process and reduce complexity. If you're interested in seeing how this could work, I put together some tutorials on similar setups: https://github.com/NirDiamant/agents-towards-production