Friday, 13 September 2024
Show HN: Search San Francisco using natural language https://bit.ly/3TqDMc5
Hey HN! We're Alex and Szymon from Bluesight ( https://bit.ly/3TQumqH ), where we're developing a foundation model for satellite data. We've built a demo to showcase what current state-of-the-art models can do and to identify areas for improvement. The demo lets you search for objects in San Francisco using natural language: you can look for things like Tesla cars, dry patches, boats, and more.

Key features:
- Search using text, or by selecting an object from the image as the source ("aim" icon)
- Toggle between object search (default) and tile search ("big" toggle, useful when contextual information matters, e.g. tennis courts)
- Adjust results with downvotes (useful when results are water images)
- Click on tiles to locate them on a map
- Control the number of retrieved tiles with a slider

We use OpenAI's CLIP model ( https://bit.ly/3ZnyIZL ) to map text and images into the same embedding space, and we run a similarity search within that space using either a text query or a source image. Because vanilla CLIP performs poorly on satellite data, we use CLIP finetuned on pairs of satellite images and OpenStreetMap ( https://bit.ly/3TsY8Bg ) tags ( https://bit.ly/3ZipzSh ). We pre-segment objects using Meta's Segment Anything Model ( https://bit.ly/3ZqkMy6 ) and pre-compute CLIP embeddings for each object.

We'd love to hear your thoughts! What worked well for you? Where did it fail? What features do you wish it had? Any real-world problems you think this could help with?

https://bit.ly/3z5wCmM
September 12, 2024 at 06:06PM
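A minimal sketch of the retrieval step described in the post: embed pre-segmented object crops and a text query with CLIP, then rank crops by cosine similarity. This assumes the public openai/clip-vit-base-patch32 checkpoint as a stand-in for Bluesight's satellite-finetuned model, and the crop file names are hypothetical placeholders for the outputs of the SAM pre-segmentation step, which is not shown.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Placeholder checkpoint; the demo uses a CLIP variant finetuned on
# satellite imagery and OpenStreetMap tags instead.
MODEL_NAME = "openai/clip-vit-base-patch32"
model = CLIPModel.from_pretrained(MODEL_NAME).eval()
processor = CLIPProcessor.from_pretrained(MODEL_NAME)

def embed_images(paths):
    """Pre-compute unit-norm CLIP embeddings for a list of object crops."""
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def embed_text(query):
    """Embed a natural-language query into the same space."""
    inputs = processor(text=[query], return_tensors="pt", padding=True)
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

# Hypothetical crop files produced earlier by SAM pre-segmentation.
crop_paths = ["crop_000.png", "crop_001.png", "crop_002.png"]
crop_embeddings = embed_images(crop_paths)      # (N, 512), computed once offline

query_embedding = embed_text("tesla car")        # (1, 512), computed per query
scores = crop_embeddings @ query_embedding.T     # cosine similarity of unit vectors
top = scores.squeeze(-1).topk(k=min(3, len(crop_paths)))
for rank, (score, idx) in enumerate(zip(top.values, top.indices), start=1):
    print(f"{rank}. {crop_paths[idx.item()]}  similarity={score.item():.3f}")
```

Searching by a selected object instead of text would work the same way, with the source crop's image embedding taking the place of the text embedding in the similarity ranking.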
Labels: Hacker News