Friday, 4 August 2023

Show HN: SymbolicAI https://bit.ly/43U9mRU

The SymbolicAI project started toward the end of last year and had its first commit in mid-January this year. If I had to briefly summarize why we think it's a project worth working on, it comes down to the following idea: we're slowly marching towards software 3.0, and we need to grow frameworks to a level of maturity that lets people not only PoC their own ideas, but also tap into strong community support that nurtures the mutual exchange of ideas between individuals. I personally believe this is the secret behind many successful open-source projects (e.g. Neovim, LazyGit, PyTorch, JAX, just to name a few).

FAQ

Q: What does the project do?

A: A lot. You can build your own chatbot and interact with as many as 13 tools (search, Wolfram, DALL-E, BLIP, CLIP, OCR, Pinecone, Whisper, Selenium, local files, etc.); pretty much most of the things you've already seen hyped on social media.

Q: Sounds close to… LangChain…?

A: Briefly, I think LangChain grew too fast and became a jack of all trades but master of none. I'm sure they had their reasons for approaching things the way they did, and I don't want to make this post about them more than I already have. Others have investigated this topic more thoroughly and ranted about it better than I could.

Q: OK, then why would I want to be part of it?

A: We're two core developers. Sometimes less is more: it gives us time to think more deeply about the framework's design and about making it accessible to others. Some principles:

- Ease of use and flexibility: we were heavily inspired by PyTorch and aimed to follow the same code structure one uses with torch. Our original intuition was that when you introduce something new, tying it to something people are already familiar with makes it more accessible (both to read and to write). The initial recipe also proved quite successful, and replacing it with something else without concrete reasons isn't worth doing, IMHO. Moreover, one of our long-term visions is smooth integration with torch: we aim to give SymbolicAI differentiable features. Imagine your chatbot learning to use its memory better (e.g. how to update its memory with relevant information).

- Just as everything in torch is a tensor, everything in our framework is a Symbol. Once defined, a Symbol gets access to a set of primitives (as an analogy, think of PrimTorch) that let you easily compose complex expressions or manipulate Symbol variables. This unlocks very fast manipulation via dot notation (<|object|><|dot|><|method|>); see the first sketch after this list.

- The hard work is done by decorators. We use them for the following reasons: (1) modularity, (2) composition, (3) flexibility, and (4) readability. The second sketch after this list illustrates the general pattern.

- We want to build a cohesive dev environment. I'm a script kiddo: I don't like to leave my terminal, I dislike web interfaces, and I want to use my local env with my own setup. We have an experimental feature, built on top of git, that enables package management. It's similar to pip, but for extensions built with our framework. Another long-term vision is to let anyone using our framework quickly share their work with the community. See https://bit.ly/3YBq9Z7 for a showcase of how to transcribe audio and create YouTube chapters with Whisper using our package manager.

There's much more to say, but beyond the two sketches below I will stop here.
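To make the Symbol idea more concrete, here is a minimal sketch based on my reading of the project's README at the time of writing; exact primitive names and behaviour may differ between versions, and it assumes an API key for the underlying neuro-symbolic engine is configured.

    # Minimal sketch; assumes symbolicai is installed and an engine key
    # (e.g. OPENAI_API_KEY) is set. Primitive names may vary by version.
    from symai import Symbol

    s = Symbol("The weather in Vienna is lovely today.")

    # Overloaded operators give semantic (fuzzy) comparison rather than
    # plain string equality; the engine decides the outcome.
    print(s == "nice weather")

    # Dot-notation primitives compose further operations on the Symbol.
    print(s.query("Which city is mentioned?"))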
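On the decorator point, the snippet below is an illustrative toy, not SymbolicAI's actual API: it only shows why decorators map well onto modularity, composition, flexibility, and readability. The names prompted and call_llm are hypothetical stand-ins.

    from functools import wraps

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for the neuro-symbolic engine call.
        return f"<engine response to: {prompt[:40]}...>"

    def prompted(instruction: str):
        # The decorator owns prompt composition; the decorated function
        # keeps an ordinary Python signature, so call sites stay readable.
        def decorator(func):
            @wraps(func)
            def wrapper(*args, **kwargs):
                prompt = f"{instruction}\nInput: {args} {kwargs}"
                return call_llm(prompt)
            return wrapper
        return decorator

    @prompted("Summarize the given text in one sentence.")
    def summarize(text: str) -> str:
        ...  # never runs; the decorator delegates to the engine

    print(summarize("SymbolicAI treats LLM calls as composable Python objects."))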
Please check our GitHub README ( https://bit.ly/43Q6rtz ) for a deeper dive, or our latest tutorial video, which highlights some relevant use cases from a more high-level POV ( https://www.youtube.com/watch?v=0AqB6SEvRqo ). I really do hope that at least some of you reading this will get interested. We have so many goals we want to reach, so many ideas we want to test, and probably just as many bugs (we call them maggots, just for fun) we need to fix. We need you.
