
iD should process and apply data returned by the diff upload #1646

Open
iandees opened this issue Jul 24, 2013 · 11 comments
Labels
bluesky (Bluesky issues are extra challenging - this might take a while or be impossible), new-feature (A new feature for iD)

Comments

@iandees
Collaborator

iandees commented Jul 24, 2013

When an iD user saves data, the current graph and its history are cleared. A download of the same area immediately follows, to load data from the API back into the freshly cleared graph.

This all happens within a second or two of a successful diff upload, which does not give much time for any replication to complete. Since the OSM API is looking to scale horizontally, replication delay will soon become a factor.

iD should consume the response from the server and apply it to the graph.
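For illustration, here is a minimal TypeScript sketch (not iD's actual code) of what consuming and applying the upload response could look like. The OSM API's diff upload response (the diffResult document returned by POST /api/0.6/changeset/:id/upload) maps each uploaded element's old or placeholder id to its new id and new version; the entity-store shape and function names below are assumptions for the sketch, not existing iD APIs.

```ts
// Sketch only: parse a diffResult and remap local ids/versions in place,
// instead of discarding the graph and re-downloading the same area.
// Entry format, per the OSM API 0.6 docs:
//   <node old_id="-1" new_id="123456" new_version="1"/>
// Deleted elements carry only old_id.

type ElementType = 'node' | 'way' | 'relation';

interface IdVersionUpdate {
  type: ElementType;
  oldId: string;
  newId: string;       // '' when the element was deleted upstream
  newVersion: number;
}

function parseDiffResult(xmlText: string): IdVersionUpdate[] {
  const doc = new DOMParser().parseFromString(xmlText, 'text/xml');
  const updates: IdVersionUpdate[] = [];
  for (const type of ['node', 'way', 'relation'] as ElementType[]) {
    for (const el of Array.from(doc.getElementsByTagName(type))) {
      updates.push({
        type,
        oldId: el.getAttribute('old_id') ?? '',
        newId: el.getAttribute('new_id') ?? '',
        newVersion: Number(el.getAttribute('new_version') ?? 0),
      });
    }
  }
  return updates;
}

// Apply the mapping to a hypothetical local entity cache keyed like 'n-1', 'w42'.
function applyDiffResult(
  entities: Map<string, { id: string; version: number }>,
  updates: IdVersionUpdate[]
): void {
  for (const u of updates) {
    const oldKey = u.type[0] + u.oldId;
    const entity = entities.get(oldKey);
    if (!entity) continue;
    entities.delete(oldKey);
    if (!u.newId) continue;                    // deleted on the server
    entity.id = u.newId;
    entity.version = u.newVersion;
    entities.set(u.type[0] + u.newId, entity); // re-key under the new id
  }
}
```

Applying the diffResult this way would keep the post-save state correct regardless of how far behind any read replica is, since nothing needs to be re-fetched immediately.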

See also:

@jfirebaugh
Member

Agreed; however, the current behavior is a simple and decent solution to several issues that are otherwise difficult:

  • Merge/conflict resolution. The reload is the only mitigation against conflicts in long multi-save edit sessions.
  • Memory management/intelligent data unloading. The reload is the only mitigation against ever-increasing memory consumption in long sessions where lots of data has been loaded but is no longer in view.

@pnorman
Contributor

pnorman commented Aug 6, 2013

Memory management/intelligent data unloading. The reload is the only mitigation against ever-increasing memory consumption in long sessions where lots of data has been loaded but is no longer in view.

If you were to reload the data and then apply the diff results, that would ensure the data you have contains the changes you uploaded. There are probably other ways to free memory.

Make no mistake, the current behaviour will break when katla is brought online and requests are served from either server.

@jfirebaugh
Member

What's the timeline for deploying katla?

@pnorman
Contributor

pnorman commented Aug 21, 2013

I'm not sure. Presumably right after we make iD default :)

cc @Firefishy

@pnorman
Contributor

pnorman commented Aug 22, 2013

02:12 < pnorman> Firefishy: any idea on a timeline of katla? (e.g. 1 week, 1 month, etc)
02:13 < Firefishy> pnorman: 1 month.
02:13 < Firefishy> ~

@tomhughes
Member

I'm not sure why you think katla is going to be such an issue? The replication delay we see in the current test environment replicating to smaug is essentially zero...

@pnorman
Contributor

pnorman commented Aug 23, 2013

[Graph: postgres_replication_9_1_main (daily replication lag)]

It fairly regularly gets above 5 seconds.

@ToeBee
Contributor

ToeBee commented Feb 18, 2014

I hit this yesterday. I changed the name of a restaurant and when iD refreshed, the old name was still there. After I did a page refresh in the browser a few seconds later, the new name was there.

@dekstop

dekstop commented Aug 5, 2015

Just to give some more feedback on the impact of this: we had a big Missing Maps mapathon in London last night with ~100 attendees, most of them using iD. On several occasions the OSM API was overloaded and slow with processing contributions, with a replication lag of up to 5 minutes.

This means that as soon as people hit "save", their just-mapped buildings seemingly vanished. This disrupts people's workflows (they have to remember where they already mapped, and may even miss things or double-map some features), and it is particularly confusing for newcomers.

This is not the first large mapathon where this has happened, either, and I expect it will become a recurring issue as HOT and Missing Maps grow their communities.

@pnorman
Contributor

pnorman commented Aug 5, 2015

On several occasions the OSM API was overloaded and slow with processing contributions, with a replication lag of up to 5 minutes.

[Graph: replication lag]

Just so we have accurate numbers: it hit 4:30 of lag once on the Tuesday, and there doesn't seem to have been any elevated load on the API.

That's not to say you weren't having problems with this bug (excursions past 5 s of replication lag are normal), just that 100 attendees were not causing any different API behavior.

@bhousel added the new-feature and bluesky labels on Dec 18, 2016
@mmd-osm
Contributor

mmd-osm commented Apr 29, 2018

Another option would be to keep the diff results around for a few seconds, reload the data via /map, and then cross-check the /map response against our diff results.

If the diff results indicate that object x should have version y, and /map still returns version y-1, then things are out of sync. In this case we could wait a bit and try another /map call. If all is good, simply discard the diff results held in memory.
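For what it's worth, here is a minimal TypeScript sketch of that cross-check, assuming hypothetical helpers (fetchMap and the element-key/version extraction are assumptions, not existing iD functions): keep the diff results in memory, reload via /map, and only discard them once the reloaded data has caught up to the uploaded versions.

```ts
// Sketch only: retry /map until it reflects the versions reported by the
// diffResult, as proposed above. 'key' is an element key such as 'n123'.

interface ExpectedVersion { key: string; version: number }

async function reloadUntilConsistent(
  bbox: string,
  expected: ExpectedVersion[],
  fetchMap: (bbox: string) => Promise<Map<string, number>>, // key -> version seen in /map
  maxAttempts = 5,
  delayMs = 2000
): Promise<Map<string, number>> {
  let seen = await fetchMap(bbox);
  for (let attempt = 1; attempt < maxAttempts; attempt++) {
    const stale = expected.some(e => (seen.get(e.key) ?? 0) < e.version);
    if (!stale) break;                              // in sync: safe to discard the diff results
    await new Promise(r => setTimeout(r, delayMs)); // wait a bit...
    seen = await fetchMap(bbox);                    // ...and try another /map call
  }
  // If still stale here, the caller could fall back to applying the diff
  // results locally, or keep waiting.
  return seen;
}
```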
