[Feature Request] Avoid duplicate bookmarks from browser extension #49
I haven't added the browser extension yet... I went to the Play Store to look for it and got sidetracked looking for the Android app. That said, can we broaden this to just "avoid duplicates altogether"? I mean, if this thing's gonna be as smart as you say, couldn't you somehow check* to see if there is the same or an almost identical bookmark in Hoarder, and have the system optionally ask you whether you still want to add (depending on the context) an identical or largely similar URL to the DB, while showing the identical or largely similar DB entries?

*Just looking for identical URLs is very unlikely to suffice here... it will need to somehow be fingerprinted, e.g., the way Picard creates a fingerprint of an audio track. Probably put more emphasis on the FQDN and less as you get further to the right in the URL... I dunno. Maybe the AI can tell by going to the links.
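The weighting idea above (FQDN matters most, path segments matter less the further right they sit) could be sketched roughly like this. This is purely illustrative: the function name, weights, and scoring scheme are assumptions, not anything Hoarder actually implements.

```python
# Hypothetical URL similarity score: the host must match, and each
# successive path segment counts for half the weight of the previous
# one, so differences near the end of the URL matter less.
from urllib.parse import urlsplit

def url_similarity(a: str, b: str) -> float:
    pa, pb = urlsplit(a), urlsplit(b)
    # Host mismatch dominates: treat different FQDNs as unrelated.
    if pa.hostname != pb.hostname:
        return 0.0
    score, weight, total = 1.0, 0.5, 1.0  # host already matched
    segs_a = [s for s in pa.path.split("/") if s]
    segs_b = [s for s in pb.path.split("/") if s]
    for sa, sb in zip(segs_a, segs_b):
        total += weight
        if sa == sb:
            score += weight
        weight /= 2  # segments further right count less
    # Penalize any leftover path segments at the lowest remaining weight.
    total += weight * abs(len(segs_a) - len(segs_b))
    return score / total
```

Two URLs on the same host that differ only in the last path segment would still score fairly high, which is the kind of "largely similar" candidate a prompt could surface.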
I think deduping exact URLs is a good start and I guess shouldn't be too hard. The "almost identical" one is a bit trickier and can probably happen asynchronously, similar to Google Photos' "Here's some stuff you can clean" page.
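Even the "exact URL" case usually involves a normalization step, so that trivially different spellings of the same address collapse to one key. The rules below (lowercasing, dropping tracking parameters and fragments, trimming trailing slashes) are a common assumption, not Hoarder's actual behavior:

```python
# Hypothetical URL normalizer for exact-match dedupe. Note: hostname
# lowercasing via urlsplit also drops any port/userinfo, which is
# acceptable for this sketch but lossy in general.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "fbclid", "gclid"}

def normalize_url(url: str) -> str:
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
             if k not in TRACKING_PARAMS]
    return urlunsplit((
        parts.scheme.lower(),
        (parts.hostname or "").lower(),
        parts.path.rstrip("/") or "/",
        urlencode(query),
        "",  # drop the fragment; it rarely identifies a different page
    ))
```

With this, `HTTPS://Example.com/page/?utm_source=x#top` and `https://example.com/page` map to the same key, so a simple exact-match check catches them.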
Yeah, the catch there is if the user has added any text with the URL... so that's one thing. I've added a couple of words with some of the URLs I've pasted in. Now that I have the browser plugin, that'll probably be reduced, especially if there's not a text input for a short note (another FR if it doesn't have that).
Yeah, that's cool... I mean, depending on the AI and processing power, it would be great if it could be done at the same time, but I can see that getting "expensive" in terms of local processing power alone. But there needs to be some function to clean duplicates if this is going to be a browsable database. If it is only searchable, then that's moot, I suppose, because duplicates will just give that "idea" higher prominence in the search... kinda the way that if I have 45 copies of "Playing in the Band," it's going to come up 10x more often than a song of which I have 4.5 copies. Had to look...
You won't have to wait long for this. I already implemented it yesterday (e99dee0) and it'll be included in the next release.
There's another way to manage that... having duplicates of something can show its importance as well as just redundancy. Going back to the music library example: I have all of Bob Dylan's studio albums as well as some of the GH albums, and when I play my entire collection, I add Dylan twice.
What I would like to see is this:
Additionally, it would be nice if there were a trash can icon on the browser pop-up that could be used to delete the bookmark, the same way you can press the star in the browser to create a bookmark and press it again to delete it.
This is going to be available in the next release.
How about a duplicate check for existing URLs? It could be added to the "Cleanups" section?
Sounds like a lot of work for something that should no longer be necessary, since duplicates are checked from 0.14 onward. So no new duplicates should get added anymore, and for existing duplicates, using the CLI to quickly check for them might be good enough?
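A one-off duplicate scan of the kind suggested for the CLI is straightforward to sketch: group existing bookmarks by a key and report any group with more than one entry. The data shape (a list of dicts with a `url` field) is an assumption for illustration:

```python
# Illustrative offline duplicate scan: bucket bookmarks by URL and
# return only the buckets containing more than one bookmark.
from collections import defaultdict

def find_duplicate_groups(bookmarks: list[dict]) -> list[list[dict]]:
    groups: dict[str, list[dict]] = defaultdict(list)
    for bm in bookmarks:
        # Keyed on the raw URL here; swap in a normalizer for fuzzier matching.
        groups[bm["url"]].append(bm)
    return [g for g in groups.values() if len(g) > 1]
```

Run once over an export of the library, this gives the user a list of candidate groups to review and clean manually.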
Sure, you're right, that sounds good enough.
Each time the browser extension is clicked, a new bookmark is created. If the page has been previously bookmarked, a new bookmark shouldn't be created, or the user should be prompted (if there is a need for duplicates; not sure why that would be useful, though?)
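The requested behavior boils down to a check-before-create step: look up the URL first, and only insert when nothing matches (or when the user explicitly opts in). A minimal sketch, with `bookmarks` standing in for whatever persistence layer the real app uses:

```python
# Hypothetical save flow: reuse an existing bookmark for the same URL
# unless the caller explicitly allows a duplicate.
def save_bookmark(bookmarks: list[dict], url: str,
                  allow_duplicate: bool = False) -> dict:
    existing = next((b for b in bookmarks if b["url"] == url), None)
    if existing is not None and not allow_duplicate:
        # In the extension, this is where the user could be prompted instead.
        return existing
    entry = {"url": url}
    bookmarks.append(entry)
    return entry
```

Repeated clicks on the extension button then become idempotent by default, while `allow_duplicate=True` leaves the door open for the (rare) case where a duplicate is actually wanted.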