
Initial support for ACMS docket reports. #3193

Merged: 40 commits merged into freelawproject:main on Apr 23, 2024

Conversation

@johnhawkinson (Contributor)

Initial support for ACMS.

This probably needs the juriscraper stub to be committed: freelawproject/juriscraper#730

I do not have a courtlistener test environment available, and I'm focusing on the juriscraper and recap-chrome parts of this work. Can someone else deal with getting this up and running? I hope it is simple and mechanical.

@mlissner (Member) left a comment


This looks pretty good to me, but for a couple things:

  1. Lots of repetition with other functions. Do we want to merge these?
  2. We need migration files per https://github.com/freelawproject/courtlistener/wiki/Database-migrations

@johnhawkinson, I hear you that you don't have a CL env set up, so @albertisfu, once you've got your elastic PR in, could you adopt this one and see what it takes to get it landed?

@albertisfu (Contributor)

Sure, I'm starting to look at this. I'm catching up on the conversation on Slack, and I'll post any questions I have in order to get this landed.

@johnhawkinson (Contributor, Author)

@albertisfu, perhaps unwisely I removed your merge commit and rebased off the main branch, because the merge doesn't really belong in the PR. I hope that doesn't mess up your merging.

Anyhow, I just added a relaxation of the pacer_case_id constraint as to number of dashes, after a conversation with Mike in #recap.

@albertisfu (Contributor) commented Oct 2, 2023

I think the next step is for me to add some tests for this new upload, but first, some comments after analyzing this new docket report:

From the ACMS docket report, we can extract both the docket data and all the docket entries from the JSON. We could then map this data to the DB model:

Docket data

"caseId":"e15ebc78-9507-4639-8a61-4bc42e613a66" - > pacer_case_id

"caseNumber":"23-6364" -> docket_number

"name":"United States of America v. Surmik" -> case_name

"caseOpened":"2023-04-19" -> date_filed

"aNumber":null - > Not in DB

"receivedDate":"2023-04-19T13:57:24Z" - > Not in DB

"partyAttorneyList":"" parties this would need some parsing in Juriscraper since it's in HTML format.

"court":{
"name":"N.D.N.Y. (SYRACUSE)",
"identifier":"N.D.N.Y. (SYRACUSE)"
} is the appeal_from court?

"caseType":"Criminal, is this case_type_information?

"caseSubType":"Post-Conviction", do we have this in DB?

"caseSubSubType":null do we have this in DB?

"districtCourtName":null Do we have this in DB?

"feeStatus":"IFP Granted" Do we have this in DB?

Docket Entries

We can get entries with these fields from the JSON:

"endDate":"2023-04-14" → DocketEntry date_filed/date_entered

"endDateFormatted":"04/14/2023" → DocketEntry date_filed/date_entered

"entryNumber":1, → DocketEntry/RECAPDocument document_number

"docketEntryText":"NOTICE OF CRIMINAL APPEAL, with the district court docket, on behalf of Appellant William Surmik, FILED. [Entered: 04/19/2023 10:26 AM]", → DocketEntry description

"docketEntryId":"bde556a7-bdde-ed11-a7c6-001dd806a1fd" → RECAPDocument pacer_doc_id

"createdOn":"2023-04-19T14:22:56Z", Not in DB

"documentCount":2, → (Number of documents, 0 a minute entry, 1 only main document, 2 or greater main document + attachments, we might not need this.)

"pageCount":12, → RECAPdocument page_count

"fileSize":410, RECAPdocument file_size

"restrictedPartyFilingDocketEntry":false, Not in DB

"restrictedDocsAvailable":false, Not in DB

Discussing this with @ERosendo, we think we could use the docketEntryId and store it as the pacer_doc_id; however, we'd need to increase its current max_length, since it's currently 32 and docketEntryIds are 36 characters long.

However, we should consider that docketEntryId is the same for all the documents that belong to a docket entry: the main document and all of its attachments (if any).

Then we should be able to match document uploads/document availability to the right DocketEntries/RECAPDocuments.
If you click on an entry with only one document, we could use the docketEntryId, which is in the download URL, to match it with the correct RECAPDocument in CL.

[Screenshot: entry download URL containing the docketEntryId]

However, the situation is more complicated when there are attachments.
In this case, when you click any of the documents, you get the same docketEntryId in the download URL for all of them, so we can't use the docketEntryId to identify the document being downloaded or available in CL.

[Screenshot: attachment list where every document shares the same docketEntryId in its download URL]

Only once you click one of them and the download page opens do you get the selectedDocuments key, which contains the docketDocumentDetailsId, the unique identifier for documents when the entry has more than one document.
[Screenshot: download page showing the selectedDocuments key with docketDocumentDetailsId]

So a workaround that @ERosendo suggested for this issue is that we can use the docketEntryId plus the document/attachment number to identify available documents in CL, or the document being downloaded.

We think this can work and we can avoid adding an additional field for storing docketDocumentDetailsId.
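
A tiny sketch of that composite identifier; the helper name and separator here are purely illustrative:

    # Purely illustrative: docketEntryId is shared by the main document
    # and all of its attachments, so pair it with the document/attachment
    # number to get a per-document identifier.
    def acms_document_key(docket_entry_id: str, attachment_number: int) -> str:
        return f"{docket_entry_id}-{attachment_number}"

    # e.g. acms_document_key("bde556a7-bdde-ed11-a7c6-001dd806a1fd", 2)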

We just need to confirm this approach with more examples. Could you provide more examples for this and other courts where this report is available?

About Juriscraper: if I understand correctly, we would obtain the JSON from the extension. We could then store this JSON in CL as a file for backup, similar to how we store HTML files. Juriscraper would then parse the PACER JSON and convert it into a format compatible with CL's JSON, correct?

@johnhawkinson (Contributor, Author)

From the ACMS docket report, we can extract both the docket data and all the docket entries from the JSON. We could then map this data to the DB model:

I think most of this is properly in juriscraper, and freelawproject/juriscraper#730 doesn't really attempt to do so, since the parsing code hasn't been written.

Docket data

"court":{ "name":"N.D.N.Y. (SYRACUSE)", "identifier":"N.D.N.Y. (SYRACUSE)" } is the appeal_from court?

Yes.

Docket Entries

"endDate":"2023-04-14" → DocketEntry date_filed/date_entered
"endDateFormatted":"04/14/2023" → DocketEntry date_filed/date_entered
"createdOn":"2023-04-19T14:22:56Z", Not in DB

Err, don't we store times now? I thought we did, for items received from RSS.

ACMS may not expose a date_filed/date_entered distinction, as far as I'm aware (I'm not sure; I think Appellate CM/ECF only exposes it in NDA emails, and I haven't seen any from ACMS). But it does appear that the createdOn field is consistent with the submission time of the entry, and the Entered text in the docketEntryText reflects the actual time it was entered, accounting for processing delay, which for ACMS can be several minutes (up to fifteen, perhaps, allegedly) and which was basically instantaneous in CM/ECF. Which is a long way to say I'm not sure which date we should be storing, but I think we should probably use the DateTime parsed back out of the docket text.
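
For instance, a sketch of pulling that DateTime back out of the entry text; the regex and format string are assumptions based on the example docketEntryText quoted earlier:

    import re
    from datetime import datetime

    # Assumed format, per the example above:
    # "... FILED. [Entered: 04/19/2023 10:26 AM]"
    ENTERED_RE = re.compile(
        r"\[Entered: (\d{1,2}/\d{1,2}/\d{4} \d{1,2}:\d{2} [AP]M)\]"
    )

    def parse_entered_datetime(docket_entry_text: str) -> datetime | None:
        match = ENTERED_RE.search(docket_entry_text)
        if match is None:
            return None
        return datetime.strptime(match.group(1), "%m/%d/%Y %I:%M %p")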

"pageCount":12, → RECAPdocument page_count

"fileSize":410, RECAPdocument file_size

I don't think this is what that is. It's certainly not the size in bytes. (I guess it could be in kilobytes?).

Discussing this with @ERosendo, we think we could use the docketEntryId and store it as the pacer_doc_id; however, we'd need to increase its current max_length, since it's currently 32 and docketEntryIds are 36 characters long.

Hilariously, we could shoehorn it in by removing the hyphens from the GUID before storing it.
That's probably unwise, we should just increase the length.

However, the situation is more complicated when there are attachments. In this case, when you click any of the documents, you get the same docketEntryId in the download URL for all of them, so we can't use the docketEntryId to identify the document being downloaded or available in CL.

We think this can work and we can avoid adding an additional field for storing docketDocumentDetailsId.

This is a problem that we have in the district court, too. For instance, https://ecf.dcd.uscourts.gov/doc1/045010205474 is the main document in DE 101 in https://www.courtlistener.com/docket/61642105/101/freeman-v-herring-networks-inc/, but 045010205474 is also the aggregation of all 8 attachments and the main document.

I guess the problem is a little different here, but related.
I think it would be unwise to throw away the attachment ID information where we have access to it.

But maybe that's not what you're proposing.

So a workaround that @ERosendo suggested for this issue is that we can use the docketEntryId plus the document/attachment number to identify available documents in CL, or the document being downloaded.

Saving the docket number doesn't preclude this. I guess we would need a new CL lookup query if we went this route?

We just need to confirm this approach with more examples. Could you provide more examples for this and other courts where this report is available?

What kind of examples do you want? ca2 and ca9 are now using this for all new cases as of yesterday, and have been using it for quite some time (a year?) for immigration cases.

https://ca9-showdoc.azurewebsites.us/23-2487 is an example of a case docketed today in the 9th circuit.
https://ca2-showdoc.azurewebsites.us/23-7222?caseId=1003904 is an example of a case docketed today in the 2nd circuit (err, with the questionable caseId addition that the recap extension applies).

About Juriscraper: if I understand correctly, we would obtain the JSON from the extension. We could then store this JSON in CL as a file for backup, similar to how we store HTML files. Juriscraper would then parse the PACER JSON and convert it into a format compatible with CL's JSON, correct?

Correct.

@albertisfu (Contributor)

Err, don't we store times now? I thought we did, for items received from RSS.

ACMS may not expose a date_filed/date_entered distinction, as far as I'm aware (I'm not sure; I think Appellate CM/ECF only exposes it in NDA emails, and I haven't seen any from ACMS). But it does appear that the createdOn field is consistent with the submission time of the entry, and the Entered text in the docketEntryText reflects the actual time it was entered, accounting for processing delay, which for ACMS can be several minutes (up to fifteen, perhaps, allegedly) and which was basically instantaneous in CM/ECF. Which is a long way to say I'm not sure which date we should be storing, but I think we should probably use the DateTime parsed back out of the docket text.

Yeah, we can now store times for docket entries. I was confused because endDate didn't match createdOn, so I thought it was something different. But now that you've mentioned that createdOn is the date_entered, we could store this date and time as we do for RSS entries, where the date and time stored are the date_entered.
If so, should we just ignore endDate?

Hilariously, we could shoehorn it in by removing the hyphens from the GUID before storing it.
That's probably unwise, we should just increase the length.

Yeah, I agree; it's better to increase the length of pacer_doc_id.

I think it would be unwise to throw away the attachment ID information where we have access to it.

Got it. So, in that case, we could store the docketDocumentDetailsId as a new field in the RECAPDocument model; maybe we could call it pacer_doc_detail_id, if @mlissner agrees. Then we could include this field in the ProcessingQueue model, so we can use it to match uploads.

Saving the docket number doesn't preclude this. I guess we would need a new CL lookup query if we went this route?

Yeah, the docket number doesn't help here. The idea was to use the pacer_doc_id (docketEntryId) and tweak the recap-query response to return the document/attachment numbers, so the extension can match available documents using the docketEntryId and the document/attachment number.
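
For example, the tweaked response might carry something like this; the shape is purely illustrative, not the actual recap-query API:

    # Illustrative shape only: keyed on the shared docketEntryId, with
    # per-document numbers the extension can use to mark documents
    # as available.
    {
        "bde556a7-bdde-ed11-a7c6-001dd806a1fd": [
            {"document_number": 1, "attachment_number": None},  # main document
            {"document_number": 1, "attachment_number": 1},     # attachment 1
        ],
    }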

As an alternative, @ERosendo also found that it's possible to get the docketDocumentDetailsId from a div on the attachment page that contains Vue properties with all the document data, including the docketDocumentDetailsId. He mentioned it's a bit hacky, but it can work, since he already has some code for extracting those properties here: johnhawkinson/recap-chrome#1

https://ca9-showdoc.azurewebsites.us/23-2487 is an example of a case docketed today in the 9th circuit.
https://ca2-showdoc.azurewebsites.us/23-7222?caseId=1003904 is an example of a case docketed today in the 2nd circuit (err, with the questionable caseId addition that the recap extension applies).

Thanks, yeah, these examples are fine. So if we'll only receive ACMS dockets from courts that use docket entry numbers (like ca2 and ca9), that seems fine. But if we expect to receive ACMS dockets whose entries don't have numbers, that will be an additional issue we need to handle, since in those cases we use the pacer_doc_id as the entry number, and for ACMS the pacer_doc_id will be the docketEntryId, which is not a number and so can't be stored in entry_number, which is an integer.

Let me know your thoughts on these comments; if you agree, I can start working on mocking some data for an ACMS docket and adding tests for ingesting the docket/docket entries.

@johnhawkinson (Contributor, Author) commented Oct 3, 2023

Let me know your thoughts on these comments; if you agree, I can start working on mocking some data for an ACMS docket and adding tests for ingesting the docket/docket entries.

Best is the enemy of good enough. This went live on Sunday and we're missing new appellate cases, so I'm trying to move with alacrity.

Yeah, we can now store times for docket entries. I was confused because endDate didn't match createdOn, so I thought it was something different. But now that you've mentioned that createdOn is the date_entered, we could store this date and time as we do for RSS entries, where the date and time stored are the date_entered.

That is…not exactly what I said:

But it does appear that the createdOn field is consistent with the submission time of the entry, and the Entered text in the docketEntryText reflects the actual time it was entered, accounting for processing delay, which for ACMS can be several minutes (up to fifteen, perhaps, allegedly) and which was basically instantaneous in CM/ECF. Which is a long way to say I'm not sure which date we should be storing, but I think we should probably use the DateTime parsed back out of the docket text.

I don't think we should be using createdOn at all. But this is a question for the JSON parser and one we can revisit in the future and reparse old data, so we don't have to decide now.

If so, should we just ignore endDate?

We should ignore almost all of them and use the date/time parsed out of the Entered text field, unfortunately.

Got it. So, in that case, we could store the docketDocumentDetailsId as a new field in the RECAPDocument model; maybe we could call it pacer_doc_detail_id, if @mlissner agrees. Then we could include this field in the ProcessingQueue model, so we can use it to match uploads.

As you are probably aware, I could write a treatise on naming.
I think that:

  • Nobody should be calling anything pacer_* because "PACER" is too confusing a term: it can refer to the generalized single sign-on infrastructure, to the nationwide billing system, to the PACER Case Locator (formerly the US National Case/Party Index), to individual court-specific CM/ECF instances, and perhaps, as you would be suggesting, to ACMS. And to overlaps of any of these things. There is almost always a better, more specific (or at least more unambiguous) name to use.
  • Is this field intended to store ACMS document IDs but store nothing in Appellate CM/ECF and District (and BK) CM/ECF? If so, it should probably have an acms_* name.
  • I recognize we have historically named things pacer_* where we perhaps should have called them ecf_*, and it is nice to not change ~~horses~~ conventions in midstream.
  • I am not sure it's great to choose names based on the AOUSC programmers' implementation names; sometimes that's good for seeing the 1:1 correspondence, but "details ID" doesn't really express that this is a unique ID for a document.

Anyhow, I would call it acms_document_guid.

So if we'll only receive ACMS dockets from courts that use docket entry numbers (like ca2 and ca9), that seems fine.

We do not know the future with perfect clarity, but I think it is unlikely any other courts will adopt ACMS in the current calendar year, if they ever do (not a foregone conclusion!).

But if we expect to receive ACMS dockets whose entries don't have numbers…

It's also not clear that ACMS does or will in the future support the concept of unnumbered docket entries.

That will be an additional issue we need to handle, since in those cases we use the pacer_doc_id as the entry number, and for ACMS the pacer_doc_id will be the docketEntryId, which is not a number and so can't be stored in entry_number, which is an integer.

Hrmm. As long as we are changing the schema (lengthening pacer_doc_id), maybe we should consider changing this to a character field:

    entry_number = models.BigIntegerField(
        help_text=(
            "# on the PACER docket page. For appellate cases, this may "
            "be the internal PACER ID for the document, when an entry "
            "ID is otherwise unavailable."
        ),
    )

I guess we can bridge that gap when we come to it?

@johnhawkinson force-pushed the 2023.09.28.acms branch 2 times, most recently from d0c76c9 to b1c5ced on October 3, 2023 at 15:40
@johnhawkinson (Contributor, Author)

Oops. So, I think my force-push/rebase confused GitHub's tracking of @mlissner's review comments. I tried force-pushing back to d0c76c9, but that did not make them show up again, so I re-force-pushed back to where we were. Hopefully this confuses no humans, only computers.

This looks pretty good to me, but for a couple things:

I addressed these in Slack on Sept. 28 but should have done so here:

1. Lots of repetition with other functions. Do we want to merge these?

In a perfect world, yes. This maintains the same level of duplication found between appellate dockets and district court dockets, which seems like too much, but it would also be work to break it up and abstract it better. Of course, the case for that work increases when you have things in triplicate rather than duplicate, but I did not see an easy way to tackle that task without a lot more uncertainty than I already have about this code, which I'm not in a good position to test.
(Maybe it is unrealistic for me to think I can do recap development without standing up a test courtlistener server to point it at? If so, someone should disabuse me of my presumptions.)

2. We need migration files per https://github.com/freelawproject/courtlistener/wiki/Database-migrations

OK, that one's on Alberto (well, they both are, really).

@albertisfu (Contributor)

I guess we can bridge that gap when we come to it.

Yeah, we can tweak entry_number only if it's required in the future.

Well, for now I'll work on increasing the length of pacer_doc_id and adding the new field acms_document_guid; I'll add the migration files and some tests.
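
For reference, a minimal sketch of what those field changes might look like in Django; the help texts and options are illustrative, not the final code:

    from django.db import models

    # Sketch only; where acms_document_guid should live was still being
    # discussed at this point in the thread.
    pacer_doc_id = models.CharField(
        help_text="The ID of the document in PACER.",  # illustrative
        max_length=64,  # widened from 32; ACMS GUIDs are 36 chars
        blank=True,
    )
    acms_document_guid = models.CharField(
        help_text="The GUID of the document in ACMS.",  # illustrative
        max_length=64,
        blank=True,
    )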

@johnhawkinson (Contributor, Author)

@albertisfu not sure that tagging you in the commit text for 9a874cb had the intended result, but there's a help_text typo to fix in the same migration as long as you're doing one, if it's not too much trouble.

@mlissner (Member) commented Oct 3, 2023

Thank you guys for all the discussion here. I've been mostly waiting for you guys to lead me, but I do have a couple thoughts (none about code, apparently!):

Best is the enemy of good enough.

Yes, but good enough is the enemy of great. If we've got our eye on this now and have the energy to do so, I'm very much in favor of doing everything the right way so we don't have to come back around and fix things. The sad reality is we're pretty bad at coming around later to fix things, particularly things that are good enough.

We can revisit in the future and reparse old data,

This freaks me out and I'd like to emphasize that I don't want to do this. One issue is that any data we get has to be re-processed in the order it's received, including across object types. So if we get a docket, then an attachment page, then a docket, we can't just re-process the dockets alone, we'll have to do all three, in the correct order. This is partly because the data can change under us, and partly because it's an assumption baked into the merging code — I don't know what would happen if we ignored this assumption.

There is also a small amount of manual work that gets done in the DB, to, for example, deal with sealed content and other issues like that. When we re-process data, we tend to wipe out those manual changes.

It's also just work I don't want to do (my time is at an extreme premium and I tend to be the one that has to do it). We should make every effort to get it right, even at a time-delay, so we can avoid having to re-process data.

force-push/rebase confused GitHub

Yeah, GitHub doesn't do well with this. The worst part is that it breaks a feature that lets reviewers see changes since their last review. This is getting annoying enough that I'm getting close to blocking force pushes on all branches.

That's it for me for now. Thank you guys again. I appreciate this moving forward at such a clip.

@johnhawkinson (Contributor, Author)

Yes, but good enough is the enemy of great.

Is it? In any case, I'm not here to define "good enough," just to say that at least from my perspective, I wouldn't hold up work mocking things based on the level of open questions we have here.

The sad reality is we're pretty bad at coming around later to fix things, particularly things that are good enough.

I guess this is about issue tracking and assignment and stuff.

We can revisit in the future and reparse old data,

This freaks me out and I'd like to emphasize that I don't want to do this.

I hear you. I could have been a bit more clear — we don't really understand what many of these fields are and whether they have any value (remember they are not displayed to the user in the UI), and we would need database model changes to store them. I think we don't need them, but if we do, there is always a way to come back and get them.
The alternative is adding them to the db model even if we're not clear on what they are used for, or even adding an extra_shit JSON blob to the database. I bet your DBA and DQA twins would both love that!

force-push/rebase confused GitHub

Yeah, GitHub doesn't do well with this. The worst part is that it breaks a feature that lets reviewers see changes since their last review. This is getting annoying enough that I'm getting close to blocking force pushes on all branches.

Yeah. I thought I was safe because (I thought) your comments were not actually tied to code. I guess I'll try clicking the magic circle-arrow red button and see if that resolves the review state (in)consistency.

That's it for me for now.

You did not speak to the T.S. Eliot question:

Got it. So, in that case, we could store the docketDocumentDetailsId as a new field in the RECAPDocument model; maybe we could call it pacer_doc_detail_id, if @mlissner agrees. Then we could include this field in the ProcessingQueue model, so we can use it to match uploads.

As you are probably aware, I could write a treatise on naming. I think that:

Anyhow, I would call it acms_document_guid.

Thank you guys again. I appreciate this moving forward at such a clip.

Har. I have no point, I just want to say: JSON Tub Time Machine…countless screaming Argonauts.

@mlissner (Member) commented Oct 3, 2023

Thanks for the comments.

For the field, I think between you and Alberto, I don't have much to add. The name seems fine and reasonable to me. If you guys agree we need it and agree that the name is good, I'm +1 to whatever you decide.

@johnhawkinson (Contributor, Author)

To the limited extent that I can do so with my eyeballs, these two recent changes (37d5c1a, f5dde3a) look correct, yes, modulo the failed migration test, which looks like it's just been dealt with. 👍

@albertisfu (Contributor)

Yeah, I added the following changes:

  • pacer_doc_id length increased to 64 in AbstractPacerDocument and ProcessingQueue

  • Added the new acms_document_guid field in AbstractPacerDocument and ProcessingQueue.
    Since I added acms_document_guid in AbstractPacerDocument, this new field will also be added to ClaimHistory (in addition to RECAPDocument), since it depends on AbstractPacerDocument. Is that correct? That is, is there a possibility of getting ClaimHistory documents for ACMS, or should we add it to the RECAPDocument model directly instead?

  • Fixed other pacer_case_id typos in Search models.

  • Merged migrations into a single file for each app (search and recap):

    0023_search_models_update.py with regular tables, event tables, and triggers.
    0023_search_models_update.sql for our replica: regular tables and event tables, but no triggers.
    0023_search_models_update_customers.sql with only regular tables, for customer replicas.
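
As a rough sketch, the core operations in such a migration might look like the following; the real file is generated with makemigrations and also covers the event tables and triggers, and the dependency name here is a placeholder:

    from django.db import migrations, models

    class Migration(migrations.Migration):
        dependencies = [("search", "0022_placeholder")]  # placeholder name

        operations = [
            migrations.AlterField(
                model_name="recapdocument",
                name="pacer_doc_id",
                field=models.CharField(blank=True, max_length=64),
            ),
            migrations.AddField(
                model_name="recapdocument",
                name="acms_document_guid",
                field=models.CharField(blank=True, max_length=64),
            ),
        ]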

@johnhawkinson (Contributor, Author)

  • Added the new acms_document_guid field in AbstractPacerDocument and ProcessingQueue.
    Since I added acms_document_guid in AbstractPacerDocument, this new field will also be added to ClaimHistory (in addition to RECAPDocument), since it depends on AbstractPacerDocument. Is that correct? That is, is there a possibility of getting ClaimHistory documents for ACMS, or should we add it to the RECAPDocument model directly instead?

Oops, yes, I think you're right to raise this concern.

The ClaimHistory only exists in bankruptcy court, and to my knowledge it does not appear in appeals of bankruptcy cases: not at the district court level, not at the Bankruptcy Appellate Panel level, and not at the Court of Appeals level. So I do not think we would ever expect to see an acms_document_guid in a ClaimHistory; the field does not belong there and should probably be added directly to RECAPDocument.

(The 'A' in ACMS stands for 'Appellate'.)

@mlissner (Member) commented Oct 3, 2023

Yeah, I second that. We don't need the new field for claims docs.

@albertisfu (Contributor)

Correct, I've moved acms_document_guid directly into RECAPDocument and fixed its help text.

@johnhawkinson (Contributor, Author)

Oops, clearly that failing test was my fault. I just tried a blind fix, but if that doesn't do it, someone with a dev environment should take it on.

I am not clear whether Alberto was planning to do more, or we were waiting for Mike's re-review, or something else. Clarity on who has the next step would be good. Thanks.

@mlissner (Member) commented Oct 4, 2023

Alberto has this on his plate.

@albertisfu (Contributor)

I've added a test (test_processing_an_acms_docket) that mocks the data we expect from Juriscraper in MockACMSDocketReport. For now, this test verifies that the long pacer_case_id and pacer_doc_id are correctly stored in the database.

Once the Juriscraper parser is complete, please let me know, and I can update this test to replace the mock data with the actual data returned by this parser.
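
For context, a hedged sketch of the shape of such a test; MockACMSDocketReport is the mock added in this PR, but the patch target and processing helper below are hypothetical stand-ins:

    from unittest import mock

    from django.test import TestCase

    class ACMSUploadTest(TestCase):  # hypothetical class name
        def test_processing_an_acms_docket(self):
            # Patch target and helper are hypothetical; MockACMSDocketReport
            # comes from this PR's test code.
            with mock.patch(
                "cl.recap.tasks.ACMSDocketReport", MockACMSDocketReport
            ):
                docket = process_acms_docket_upload()  # hypothetical helper
            # ACMS IDs are 36-char GUIDs, so they must fit the widened columns.
            self.assertEqual(len(docket.pacer_case_id), 36)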

@johnhawkinson (Contributor, Author)

Once the Juriscraper parser is complete, please let me know, and I can update this test to replace the mock data with the actual data returned by this parser.

Yes, I definitely shall. Is there anything blocking this from landing?

@albertisfu (Contributor) left a comment


I've updated the branch with main. This seems ready to me!

@mlissner (Member)

@johnhawkinson do you want to give this another look or should we just go for it?

@ERosendo (Contributor)

@mlissner :shipit:

@mlissner merged commit 307f4c1 into freelawproject:main on Apr 23, 2024. 9 checks passed.
@mlissner (Member)

Merged! I agree with Eduardo that this is cooked enough. If we need more changes once this hits prod, we'll do that, but let's get this big thing under our belt and move forward.

@blancoramiro, this will need some clever deployment. Let's see if we can do that today or tomorrow, since all other deploys will be blocked until we get it in (but no urgent code is expected).

@mlissner (Member)

Thank you all for your contributions here. This one was a real team effort.
