What's holding me back from AI repos and agents isn't running them locally, though. It's the lack of granular control. I'm not even sure what I want. I certainly don't want to approve every request, but the idea of large amounts of personal data being accessible, unchecked, to an AI is concerning.
Perhaps what's needed is an agent focused solely on security, one that learns your personal preferences.
Agreed regarding the privacy/security hesitations. Running the models locally with Ollama is an option, but of course there are hardware requirements and limitations of open-source models to contend with. Ultimately it's a balance between privacy and ease of use, and I'm not sure there's a good one-size-fits-all for that balance.
Is your idea of granular control (roughly) a group of agents in separate containers, each writing back to its own designated store? Or do you have something more fine-grained in mind?
The honest answer is it doesn't help a ton, at least not in its current form. It's fun to look at, and occasionally I'll see some interesting semantic connections between articles - but by far the more useful tools here are the wiki generation, auto-tagging, and chat/MCP features. The graph view definitely needs more love - if anyone has thoughts on how to make it more useful, I'd love to hear them.
Not 100% sure what ingestion methods are available? Browser extension clipper and RSS are two. I guess I can manually create a node/atom? Can it scan a local folder for markdown notes? Or OCR some PDF -> markdown/frontmatter sidecar files -> atomic node? That would be the dream.
Yes, the following ingestion methods are available:
- RSS feed
- Web clipper browser extension (working on publishing it to the Chrome Web Store)
- Import a folder of markdown files (desktop app only)
- Manually ingest a one-off URL
- Manually create an atom
The iOS app is a more recent addition, also not yet published to the App Store, but the idea is to add a share target so you can quickly add articles to your KB while browsing on mobile.
PDF support would be great, and I'd love to hear more ideas about ingestion methods if anyone has them!
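For anyone curious what the "import a folder of markdown files" path involves conceptually, here is a minimal stdlib-only sketch in Python. The helper names, the naive frontmatter parsing, and the atom dict shape are all made up for illustration; this is not Atomic's actual code.

```python
import os

def parse_frontmatter(text):
    """Split optional frontmatter from a markdown body.
    Very naive: only handles simple `key: value` lines between --- fences.
    Returns (metadata dict, body text)."""
    if text.startswith("---\n"):
        head, sep, body = text[4:].partition("\n---\n")
        if sep:  # closing fence found
            meta = {}
            for line in head.splitlines():
                key, _, value = line.partition(":")
                if key.strip():
                    meta[key.strip()] = value.strip()
            return meta, body.lstrip("\n")
    return {}, text

def scan_folder(root):
    """Walk a folder tree and yield one 'atom' dict per markdown file."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".md"):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8") as f:
                    meta, body = parse_frontmatter(f.read())
                yield {
                    "title": meta.get("title", name[:-3]),
                    "tags": meta.get("tags", ""),
                    "body": body,
                    "source": path,
                }
```

A real importer would also want a proper YAML parser for frontmatter and deduplication on re-import, but the shape of the pipeline (walk, parse, emit atoms) is the same.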
Thanks! The project is still in its early stages, and I haven't had a chance to set up app signing yet. Right now the easiest way to get started is the web interface via docker compose.
For sure. The idea here (or at least how I've been using it) is to use Atomic as a catch-all place to put personal notes, interesting articles, research ideas... pretty much anything, and Atomic will handle the categorization and knowledge synthesis. For example, I have a knowledge base that uses RSS to sync top Hacker News articles, and I'll occasionally generate new wiki-style articles which summarize and synthesize the articles based on top-level categories (AI, hardware, philosophy, you name it).
I think tools like this will get really popular once more non-technical users get comfortable with CLI-based agentic tools. What's your go-to agent harness when using this? Will check it out!
Clean approach to connecting knowledge semantically. The self-hosted angle is smart — data ownership matters especially for personal knowledge. How are you handling the semantic matching under the hood?
I saw sqlite-vec used for semantic search, so I assume notes are stored in SQLite.
- What considerations did you have for the storage layer?
- Also, does storage on disk grow linearly as notes/atoms are added?
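On the linear-growth question: in a sqlite-vec-style setup each atom typically stores a fixed-size embedding blob (dimension × 4 bytes for float32) alongside its text, so disk usage should grow roughly linearly with atom count. A stdlib-only sketch of the general pattern, with brute-force cosine similarity standing in for sqlite-vec's indexed vector search (the schema and function names are made up for illustration, not Atomic's actual code):

```python
import math
import sqlite3
import struct

def serialize(vec):
    """Pack a float vector into a fixed-size BLOB (4 bytes per dimension)."""
    return struct.pack(f"{len(vec)}f", *vec)

def deserialize(blob):
    return list(struct.unpack(f"{len(blob) // 4}f", blob))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE atoms (id INTEGER PRIMARY KEY, title TEXT, embedding BLOB)")

def add_atom(title, embedding):
    db.execute("INSERT INTO atoms (title, embedding) VALUES (?, ?)",
               (title, serialize(embedding)))

def nearest(query, k=3):
    """Brute-force k-nearest-neighbour scan over all stored embeddings.
    sqlite-vec replaces this full scan with an indexed virtual-table query."""
    rows = db.execute("SELECT title, embedding FROM atoms").fetchall()
    scored = [(cosine(query, deserialize(blob)), title) for title, blob in rows]
    return [title for _, title in sorted(scored, reverse=True)[:k]]
```

The per-atom storage cost here is constant (embedding size plus note text), which is what makes growth linear in the number of atoms.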