
Plenoptic #150

Open
16 of 30 tasks
billbrod opened this issue Nov 17, 2023 · 42 comments

Comments

@billbrod

billbrod commented Nov 17, 2023

Submitting Author: Billy Broderick (@billbrod)
All current maintainers: @billbrod (there are several other authors / contributors, but I'm the only one committed to maintaining)
Package Name: Plenoptic
One-Line Description of Package: a Python library for model-based synthesis of perceptual stimuli
Repository Link: https://github.com/plenoptic-org/plenoptic
Version submitted: v1.0.2
Editor: @sneakers-the-rat
Reviewer 1: @NickleDave
Reviewer 2: @DanielaPamplona
Reviewers assigned: 2023-12-13
Reviews due: 2024-01-19
Archive: TBD
JOSS DOI: TBD
Version accepted: TBD
Date accepted (month/day/year): TBD


Code of Conduct & Commitment to Maintain Package

Description

  • Include a brief paragraph describing what your package does: plenoptic provides tools to help researchers understand their model by synthesizing novel informative stimuli (e.g., images, movies, or sound clips, depending on the model), which help build intuition for what features the model ignores and what it is sensitive to.

Scope

  • Please indicate which category or categories.
    Check out our package scope page to learn more about our
    scope. (If you are unsure of which category you fit, we suggest you make a pre-submission inquiry):

    • Data retrieval
    • Data extraction
    • Data processing/munging
    • Data deposition
    • Data validation and testing
    • Data visualization¹
    • Workflow automation
    • Citation management and bibliometrics
    • Scientific software wrappers
    • Database interoperability

Domain Specific & Community Partnerships

- [ ] Geospatial
- [ ] Education
- [ ] Pangeo

Community Partnerships

If your package is associated with an
existing community please check below:

  • For all submissions, explain how and why the package falls under the categories you indicated above. In your explanation, please address the following points (briefly, 1-2 sentences for each):

I am not sure which of the above categories I fit under, but the discussion in my pre-submission inquiry (#97) assured me that I'm in scope.

  • Who is the target audience and what are scientific applications of this package?

Researchers in vision science, neuroscience, and machine learning. The goal is to generate novel stimuli (images, videos, audio) that researchers can then use in new experiments to better understand their computational models.

  • Are there other Python packages that accomplish the same thing? If so, how does yours differ?

Not that I'm aware of. There are several bits of related research code, but no packages. Research code includes: https://github.com/ArturoDeza/Fast-Texforms, https://github.com/brialorelle/texformgen, https://github.com/ProgramofComputerGraphics/PooledStatisticsMetamers, https://github.com/freeman-lab/metamers/, among others.

  • If you made a pre-submission enquiry, please paste the link to the corresponding issue, forum post, or other discussion, or @tag the editor you contacted:

#97 , @NickleDave

Technical checks

For details about the pyOpenSci packaging requirements, see our packaging guide. Confirm each of the following by checking the box. This package:

  • does not violate the Terms of Service of any service it interacts with.
  • uses an OSI approved license.
  • contains a README with instructions for installing the development version.
  • includes documentation with examples for all functions.
  • contains a tutorial with examples of its essential functions and uses.
  • has a test suite.
  • has continuous integration setup, such as GitHub Actions, CircleCI, and/or others.

Publication Options

JOSS Checks
  • The package has an obvious research application according to JOSS's definition in their submission requirements. Be aware that completing the pyOpenSci review process does not guarantee acceptance to JOSS. Be sure to read their submission requirements (linked above) if you are interested in submitting to JOSS.
  • The package is not a "minor utility" as defined by JOSS's submission requirements: "Minor ‘utility’ packages, including ‘thin’ API clients, are not acceptable." pyOpenSci welcomes these packages under "Data Retrieval", but JOSS has slightly different criteria.
  • The package contains a paper.md matching JOSS's requirements with a high-level description in the package root or in inst/.
  • The package is deposited in a long-term repository with the DOI:

Note: JOSS accepts our review as theirs. You will NOT need to go through another full review. JOSS will only review your paper.md file. Be sure to link to this pyOpenSci issue when a JOSS issue is opened for your package. Also be sure to tell the JOSS editor that this is a pyOpenSci reviewed package once you reach this step.

Does the JOSS and pyOpenSci review happen simultaneously? I was going to add the paper.md and submit to JOSS after going through this review.

Are you OK with Reviewers Submitting Issues and/or pull requests to your Repo Directly?

This option will allow reviewers to open smaller issues that can then be linked to PR's rather than submitting a more dense text based review. It will also allow you to demonstrate addressing the issue via PR links.

  • Yes I am OK with reviewers submitting requested changes as issues to my repo. Reviewers will then link to the issues in their submitted review.

Confirm each of the following by checking the box.

  • I have read the author guide.
  • I expect to maintain this package for at least 2 years and can help find a replacement for the maintainer (team) if needed.

Please fill out our survey

P.S. Have feedback/comments about our review process? Leave a comment here

Editor and Review Templates

The editor template can be found here.

The review template can be found here.

Footnotes

  1. Please fill out a pre-submission inquiry before submitting a data visualization package.

@NickleDave
Contributor

Hi @billbrod, thank you for your detailed submission.

Just letting you know we are working on getting the review started.

@sneakers-the-rat will be the editor and they are going to do the initial checks.

We're still looking for one reviewer (the other one is me 🙂 -- sorry @sneakers-the-rat for stealing your thunder 😝 )

@sneakers-the-rat
Collaborator

sneakers-the-rat commented Dec 5, 2023

Editor response to review:


Editor comments

👋 Hi @NickleDave and @DanielaPamplona Thank you for volunteering to review for pyOpenSci!

Please fill out our pre-review survey

Before beginning your review, please fill out our pre-review survey. This helps us improve all aspects of our review and better understand our community. No personal data will be shared from this survey - it will only be used in an aggregated format by our Executive Director to improve our processes and programs.

  • reviewer 1 survey completed.
  • reviewer 2 survey completed.
  • reviewer 3 (if applicable)

The following resources will help you complete your review:

  1. Here is the reviewers guide. This guide contains all of the steps and information needed to complete your review.
  2. Here is the review template that you will need to fill out and submit
    here as a comment, once your review is complete.

Please get in touch with any questions or concerns! Your review is due: Friday, January 5th, 2024

Reviewers: @NickleDave and @DanielaPamplona
Due date: Friday, January 5th, 2024

@sneakers-the-rat
Collaborator

We have located reviewers! Thank you @NickleDave and @DanielaPamplona for your time, and looking forward to digging into this package with you all!

Please see the above post with links to the reviewer's guide and next steps.

The next step will be to copy the review checklist template as a comment here, and as you work through the checklist the authors have indicated it is ok to raise issues and make pull requests with problems and questions as they arise. Since github issues don't have threading, we will use issues raised in the repo as "threads" to keep discussion orderly :). When you raise an issue or PR, please link back to this issue so that we can track issues raised in relationship to the review. It is up to the authors if they would like to tag the issues or name them in any specific way :).

The reviewers should feel free to split up different focuses or roles depending on what plays to your strengths - both will complete the basic checklist, but you can decide if you'd rather focus on docs, tests, internal architecture, onboarding, and so on for more detailed focus. The authors can also let the reviewers know if there are any specific areas they would like feedback on or close attention on.

Let's shoot for reviews in Friday, January 5th, 2024 - gl;hf to all ♥️

@billbrod
Author

Thanks all, looking forward to this process! While I'm looking forward to all kinds of feedback, I am specifically interested in feedback on the docs, especially the motivation for the included methods and explaining what they can be used for scientifically -- I think that's the weakest (and most important?) part of our documentation right now. Since all the primary authors for the project are from the same lab, we talked a lot offline and have a similar background.

@DanielaPamplona

Hi,
Just to let you know that I am looking forward to this. I am silent because I did not manage to start the review, but as soon as I start, I am sure I will have tons of things to say.
Cheers,

@sneakers-the-rat
Collaborator

This is ur friendly neighborhood editor here to say that I hope the authors and reviewers are taking a much deserved break, and to keep a reasonable and humane timeline I am making the call to reset the review window to start Jan 1 - let's say reviews in by January 19th, 2024.

Have a lovely rest of the holiday

@sneakers-the-rat
Collaborator

Hello dear reviewers @NickleDave @DanielaPamplona - checking in and seeing how things are going, if any guidance is needed or if i can be of any help getting reviews rolling!

@DanielaPamplona

DanielaPamplona commented Jan 9, 2024 via email

@sneakers-the-rat
Collaborator

sneakers-the-rat commented Jan 10, 2024

@DanielaPamplona

I do have a question... What is a "vignette" ?

Completely fair question. That's a term from R (see Vignettes), so more than reasonable to have not seen it before. They are sort of like longer, narrative code examples, but they don't have a strict definition - if it's easier, you can think of them as synonymous with examples.

If one were to draw the distinction, an example might look like something you'd find in a docstring like:

def myFunction(arg):
    """
    Do a function thing

    Args:
        arg: The thing we give this function

    Example:
        To use this function, give it an argument!

        >>> arg = 1
        >>> print(myFunction(arg))
        5

        Any argument will do!

        >>> arg = 'new argument'
        >>> print(myFunction(arg))
        5

    """
    # TODO: use the argument
    return 5

Where a vignette would be a longer-form demonstration of major functionality of the package, often with real-world use that show multiple things working together.

From a quick look, everything I see in the examples directory could be called a vignette, but I also don't think you need to split hairs too-too finely there :)

@DanielaPamplona

DanielaPamplona commented Jan 10, 2024 via email

@NickleDave
Contributor

Thank you for checking in @sneakers-the-rat and keeping us on track -- no questions from me yet!
I just started a new job but have this on my to-do list and expect to get through the checklist this week, will raise issues with @billbrod if needed as I do and link them to this

@billbrod
Author

Thanks all! I've got a workshop in 2 weeks (for a different software package) that I'm frantically preparing for, so I most likely won't make much progress on these revisions before then. So no rush Daniela!

@DanielaPamplona

Hi,
So here is my review. Sorry for the delay.


Legend:
Y = Yes
N = No
~ = Not applicable
+/- = more or less/ needs improvement

Package Review

Please check off boxes as applicable, and elaborate in comments below. Your review is not limited to these topics, as described in the reviewer guide

  • [Y] As the reviewer I confirm that there are no conflicts of interest for me to review this work (If you are unsure whether you are in conflict, please speak to your editor before starting your review).

Documentation

The package includes all the following forms of documentation:

  • [Y] A statement of need clearly stating problems the software is designed to solve and its target audience in README.
  • [Y] Installation instructions: for the development version of the package and any non-standard dependencies in README.
  • [~] Vignette(s) demonstrating major functionality that runs successfully locally.
  • [Y] Function Documentation: for all user-facing functions. [Did not check everything; randomly sampled 50% and documentation was always present.]
  • [+/-] Examples for all user-facing functions. [Did not check everything; randomly sampled 50% and examples were sometimes missing.]
  • [Y] Community guidelines including contribution guidelines in the README or CONTRIBUTING.
  • [Y] Metadata including author(s), author e-mail(s), a url, and any other relevant metadata e.g., in a pyproject.toml file or elsewhere.

Readme file requirements
The package meets the readme requirements below:

  • [Y] Package has a README.md file in the root directory.

The README should include, from top to bottom:

  • [Y] The package name
  • Badges for:
    • [Y] Continuous integration and test coverage,
    • [Y] Docs building (if you have a documentation website),
    • [N] A repostatus.org badge,
    • [Y] Python versions supported,
    • [Y] Current package version (on PyPI / Conda).

NOTE: If the README has many more badges, you might want to consider using a table for badges: see this example. Such a table should be wider than it is tall. (Note that a badge for pyOpenSci peer-review will be provided upon acceptance.)

  • [Y] Short description of package goals. [I would emphasize that the package is not only for generating images, but also for analyzing them. Also use the term "generate/generative", which is more appealing.]
  • [Y] Package installation instructions.
  • [~] Any additional setup required to use the package (authentication tokens, etc.)
  • [+/-] Descriptive links to all vignettes. If the package is small, there may only be a need for one vignette which could be placed in the README.md file.
  • [Y] Brief demonstration of package usage (as it makes sense - links to vignettes could also suffice here if package description is clear) [The quick start is not that "brief"]
  • [N] Link to your documentation website.
  • [~] If applicable, how the package compares to other similar packages and/or how it relates to other packages in the scientific ecosystem.
  • [Y] Citation information

Usability

Reviewers are encouraged to submit suggestions (or pull requests) that will improve the usability of the package as a whole.
Package structure should follow general community best-practices. In general please consider whether:

  • [+/-] Package documentation is clear and easy to find and use. [The documentation webpage is not clear. See suggestion below]
  • [+] The need for the package is clear
  • [+/-] All functions have documentation and associated examples for use. [Did not check everything; randomly sampled 50% and examples were sometimes missing.]
  • [Y] The package is easy to install

Functionality

  • [Y] Installation: Installation succeeds as documented.
  • [Y] Functionality: Any functional claims of the software have been confirmed.
  • [~] Performance: Any performance claims of the software have been confirmed.
  • Automated tests:
    • [Y] All tests pass on the reviewer's local machine for the package version submitted by the author. Ideally this should be a tagged version making it easy for reviewers to install. [Ran the tests locally rather than in an automated setup; all tests passed.]
    • [Y] Tests cover essential functions of the package and a reasonable range of inputs and conditions.
  • [Y] Continuous Integration: Has continuous integration setup (We suggest using Github actions but any CI platform is acceptable for review)
  • Packaging guidelines: The package conforms to the pyOpenSci packaging guidelines.
    A few notable highlights to look at:
    • [Y] Package supports modern versions of Python and not End of life versions.
    • [Y] Code format is standard throughout package and follows PEP 8 guidelines (CI tests for linting pass)

For packages also submitting to JOSS

  • [~] The package has an obvious research application according to JOSS's definition in their submission requirements.

Note: Be sure to check this carefully, as JOSS's submission requirements and scope differ from pyOpenSci's in terms of what types of packages are accepted.

The package contains a paper.md matching JOSS's requirements with:

  • [N] A short summary describing the high-level functionality of the software
  • [N] Authors: A list of authors with their affiliations
  • [N] A statement of need clearly stating problems the software is designed to solve and its target audience.
  • [N] References: With DOIs for all those that have one (e.g. papers, datasets, software).

Final approval (post-review)

  • The author has responded to my review and made changes to my satisfaction. I recommend approving this package.

Estimated hours spent reviewing:
8 hours

Review Comments

plenoptic is a Python package for the generation and analysis of model-based visual stimuli.
The package is helpful, contributes to better and FAIR science, and is especially useful for researchers who are not experts in programming.
I found the package well-organized and clean. Please see the "checklist". Some things can be improved, especially on the documentation webpage. In general, I think the package is almost ready to be "published"; the comments below are minor and easy to fix.

  • README:

    • The word "help" appears too many times; you can use "support", "contribute", or another synonym. I also think you should mention the analysis/interpretation of the models here.
    • On the GitHub page, some images have transparent backgrounds, so they look strange, like this one:
      Screenshot from 2024-01-04 18-00-38
  • Documentation webpage:https://plenoptic.readthedocs.io/
    Observations on the form

    1) The documentation webpage is confusing and not well organized, but it can be easily improved. A possible organization would be:

      ->Presentation
          -> Introduction
          -> Installation
          -> Quick start
          -> Citation Guide
    
      ->Modules
          -> Metric
          -> Simulate
          -> Synthesize
          -> tools
          -> Display and animate functions
          
      ->Tutorials and examples
          -> Eigendistortions 
          ...
          -> Reproducing Wang and Simoncelli
          -> Extending existing synthesis objects
          
      ->Advanced Usage
          -> Model requirements
          -> Synthesis object design
          -> Extending...
          -> API
          -> Tips and tricks
          -> Reproducibility
    

    In the section "Modules", the general goal and content of each module would be presented and, eventually, some code snapshots. Not all the modules need to be presented, only the most relevant and/or difficult to use. The code snapshots can come from the tutorials and examples that already exist; in addition, there could be links between the module and tutorial sections.
    The section "Tutorials and examples" would contain all the tutorials already on the website.

    2) The tutorials/examples are too verbose. I would prefer something sharper. For instance:

      • In " Extending existing synthesis objects" it is written:

      "Once you are familiar with the existing synthesis objects included in plenoptic, you may wish to change some aspect of their function. For example, you may wish to change how the po.synth.MADCompetition initializes the MAD image or alter the objective function of po.synth.Metamer. While you could certainly start from scratch or copy the source code of the object and alter them directly, an easier way to do so is to create a new sub-class: an object that inherits the synthesis object you wish to modify and over-writes some of its existing methods."

      It could be written (this is an example, it is not to be taken literally):

      Functions of synthesis objects can be changed without altering the source code directly. For that, it is only necessary to create a new sub-class and over-write the existing methods. Below follows an example of changing the po.synth.MADCompetition initialization with the MAD image and the objective function of po.synth.Metamer.

      • The "Eigendistortions" section starts with "In this tutorial we will cover:...", I think this is a great way of starting, and all the tutorials should start like that.
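      The subclassing pattern described in this bullet can be sketched generically. Note that `Metamer`, `target`, and `objective` below are hypothetical stand-ins for illustration, not plenoptic's actual API:

```python
# Generic sketch of the "extend by subclassing" pattern.
# `Metamer` is a toy stand-in, NOT plenoptic's real class.
class Metamer:
    """Toy synthesis object with an overridable objective function."""

    def __init__(self, target):
        self.target = target

    def objective(self, candidate):
        # Default objective: squared error against the target.
        return (candidate - self.target) ** 2


class L1Metamer(Metamer):
    """Sub-class that over-writes only the objective; initialization
    and any other methods are inherited unchanged."""

    def objective(self, candidate):
        # Swap in an absolute-error objective.
        return abs(candidate - self.target)


print(Metamer(3.0).objective(5.0))    # 4.0
print(L1Metamer(3.0).objective(5.0))  # 2.0
```

      The same structure applies to real synthesis objects: inherit, over-write the one method you want to change, and let everything else come from the parent class.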

    Observations on the contents
    * Installation: One should not mix pip and conda.
    * Installation: There is a problem when using plenoptic with jupyter notebook, see issue: problem when installing with jupyter notebook  plenoptic-org/plenoptic#242
    * Model requirements: Use a numbered list instead of a bulleted one, and link to an example. The citation method is not consistent with the one used in the conceptual information.
    * Quick start: it is not really quick. It could have finished after cell 9 of the code.
    * Code: many modules are initialized by importing all the functions from the submodules; for instance, in /plenoptic/tools/__init__.py it is written:

              from .data import *
              from .conv import *
              from .signal import *
              from .stats import *
              from .display import *
              from .straightness import *
    
              from .optim import *
              from .external import *
              from .validate import remove_grad
    
              from . import validate
    
          You can use lazy_loader (https://pypi.org/project/lazy_loader/) to automate the process.
    
      * If "all synthesis methods will raise a ValueError if given a model with any learnable parameters", then the helper function that you provide should be run automatically when a model is created.
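For context on the lazy_loader suggestion above: it builds on Python's module-level `__getattr__` hook (PEP 562). A minimal self-contained sketch of the idea, using stdlib modules as stand-ins for plenoptic's submodules:

```python
# Sketch of lazy submodule loading via PEP 562's module __getattr__.
# lazy_loader wraps this same mechanism; the names in _lazy_submodules
# are stdlib stand-ins, not plenoptic's actual submodules.
import importlib
import sys

_lazy_submodules = {"json", "base64"}


def __getattr__(name):
    # Called only when `name` is not found as a normal module attribute.
    if name in _lazy_submodules:
        module = importlib.import_module(name)
        globals()[name] = module  # cache so the import happens only once
        return module
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")


# Accessing the attribute through the module object triggers the import:
this_module = sys.modules[__name__]
print(this_module.json.dumps({"a": 1}))  # {"a": 1}
```

In a real package `__init__.py`, the relative submodules would be imported with `importlib.import_module(f".{name}", __name__)`; lazy_loader's `lazy.attach` packages this up with `__dir__` and `__all__` handling as well.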
    

For now, that's it! I will look in more detail at the tutorials in the second round, if there is one.

Cheers!

@billbrod
Author

Thanks Daniela! Like I said, it'll be a bit before I can dedicate time to most of these, but I wanted to try and solve the jupyter issue. However, I can't reproduce the issue and I'm not sure how to fix it -- @NickleDave or @sneakers-the-rat, do you have any ideas?

@sneakers-the-rat
Collaborator

Thank you so much to our reviewers for the amount of time and care they've given to this package! There are a lot of very helpful suggestions here, and it seems relatively clear to me which of these are review blockers and which are just tips, so I think we have a clear path forward once @billbrod has some time to address things. Please take your time responding to the issues, and also feel free to pop back in here if anything is unclear.

I took a look at the jupyter issue and think there are a few possible routes there - specifying -c conda-forge, reinstalling requests/charset-normalizer, or specifying jupyterlab all seem like they would fit the bill.

I'll check back in a few weeks to see how progress is going, thanks again to the reviewers for all their work <3

@NickleDave
Contributor

NickleDave commented Mar 1, 2024

Hi @sneakers-the-rat and @DanielaPamplona, I'm just commenting here for @billbrod that he is on paternity leave, as of last night when his wife went into labor!

So I think we will need to put this review "pending maintainer response" while he's out.
@sneakers-the-rat I will abuse my power as editor and add the label, just to save you the work 😁

edit: I put "on hold" but that's the wrong label. Maybe we shouldn't add a label here--I'm just trying to give context for somebody who wants the overview of where reviews are at. Happy to let the actual editor do whatever they think is appropriate

edit edit: for context, I needed to message Billy about other stuff related to US-RSE, that's how I found out and told him I would comment here since clearly he's busy with other things

@NickleDave NickleDave added on-hold A tag to represent packages on review hold until we figure out a bigger issue associate with review ⌛ pending-maintainer-response and removed on-hold A tag to represent packages on review hold until we figure out a bigger issue associate with review labels Mar 1, 2024
@sneakers-the-rat
Collaborator

Thank you for the update :). Congratulations @billbrod and please take the time you need and feel no hurry. We're here to help your work, and that certainly doesn't include rushing you away from your newborn. Best of luck and let us know when you're ready!

@DanielaPamplona

Congratulations! Have fun with the baby!

@billbrod
Author

Hi all, just an update that I'm back from parental leave and will start working on this again, though my overall capacity is still reduced.

@DanielaPamplona

DanielaPamplona commented May 29, 2024 via email

@sneakers-the-rat
Collaborator

Fabulous!!! Hope you and the family are well. Just letting you know I'm on strike and will not be working until the strike is resolved. Whether someone else picking up the review is crossing a picket line is hard to say because this is volunteer work, not for my employer, but I won't be proceeding with editorial duties until the University of California resolves its unfair labor practices with UAW 4811.

@DanielaPamplona

DanielaPamplona commented Jun 1, 2024 via email

@billbrod
Author

billbrod commented Jun 3, 2024

Good luck with the strike!

@lwasser
Member

lwasser commented Jun 6, 2024

hi colleagues! Wow reading through this review so far and seeing the incredible level of support that you ALL have for each other really warmed my heart ❤️ 💟 in a special way! Thank you all for supporting life changes, new kiddos (congratulations on the little one @billbrod !), political challenges and standing your ground, and life business - all in a single review. ✨ you all are incredible.

We at pyOpenSci are dedicated to supporting our community and our volunteer editors like the incredible @sneakers-the-rat . SO we are going to look into finding a second editor to support moving this review forward (even if it's slowly @billbrod given home life responsibilities!! ) so Jonny can focus on the important issues happening on their campus now.

Can you all give us a sense of where things are? it looks to me like both reviews are in but @billbrod has some work to do and might be below normal work capacity given more important life responsibilities. I'm just trying to figure out how quickly we want to get a new editor on board! @NickleDave perhaps you can give me a download as it does say the issue is on hold for now. do we want to just pause on this review and check back in a month or two? or should we onboard someone now(ish)?

thank you all!! i appreciate you!

@billbrod
Author

billbrod commented Jun 7, 2024

I think we can probably check back in in a month or two -- I'm helping run a workshop in 2 weeks and then I have a presentation at a summer school 2 weeks after that, so all that paired with baby responsibilities means I probably won't be able to focus on the review until the 2nd week or so of July.

@lwasser
Member

lwasser commented Jun 7, 2024

fantastic! thanks @billbrod the scipy meeting is in mid july. how about we check back in at the end of july. this will also give us some time to find an editor to wrap things back up here. @kierisi eventually we will have a bot to notify us in x weeks. but for now - this is the review that we could use a second editor for. Let's plan to check back in on it after scipy which would be mid/ late july -- perhaps the 22nd of july?

@kierisi

kierisi commented Jun 7, 2024

@lwasser sounds good!

@lwasser lwasser moved this from under-review to on-hold-or-maintainer-unresponsive in peer-review-status Aug 13, 2024
@sneakers-the-rat
Collaborator

hi hello checking in here - i'm back at work and can continue editing this review, last i recall we were waiting on some responses to issues, correct?

@DanielaPamplona

DanielaPamplona commented Aug 23, 2024 via email

@NickleDave
Contributor

I have noticed @billbrod is still working on issues -- like getting a version on conda-forge 🚀. Looks like we're down to maybe one pyOS-related issue.

So yes I think we are just waiting for him to finish that up and then reply with "here's how I addressed the reviews"

@billbrod
Author

Hi, yes, I'm still working through the issues and reviews. I'll ping the reviewers on the two issues where I need them to try installing / running the tests again, to see if my changes fix the problems they have.

My next steps:

  • I'm about to deprecate the Geodesics class, which isn't nearly as stable/robust as the rest of the library (I'm moving it to another repo to be worked on separately). As part of that, I'm going to deprecate other functions that aren't used.
  • Fixing several small bugs that I discovered when making recent changes.
  • Change how my documentation is built, so that my example notebooks are run as part of the build, in order to make sure they stay up to date with any changes. (Currently, we run all the notebooks as part of the test, but don't run the optimization to completion, to save time, and then the documentation uses cached versions of the notebook that have been run to completion). This will involve moving over to myst-nb and using myst more broadly.
  • Change the documentation theme.
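For reference, executing notebooks at build time with myst-nb is typically configured in Sphinx's conf.py; a hedged sketch follows (the values are illustrative, and the relevant option names should be checked against the current myst-nb documentation):

```python
# Hypothetical Sphinx conf.py fragment for running example notebooks as
# part of the docs build with myst-nb (values here are illustrative).
extensions = ["myst_nb"]

nb_execution_mode = "cache"          # re-run only notebooks whose source changed
nb_execution_timeout = 600           # seconds; optimization-heavy notebooks need headroom
nb_execution_raise_on_error = True   # fail the docs build if any notebook errors
```

The "cache" mode is what keeps notebooks from re-running their full optimizations on every build while still guaranteeing they stay in sync with code changes.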

A lot of the feedback in the review was about structure / content of the documentation and the API, which I'll work on once I've done the restructuring above.

We are also currently working on adding linters and making sure they pass.

Does this sound like a reasonable plan to everyone? Should I post an intermediate check-in with what I've addressed so far before completing the above steps? Or wait until after I'm done with them?

(I've also moved the repo to a new organization, plenoptic-org, so I just updated my initial comment to reflect that)

@NickleDave
Contributor

Should I post an intermediate check-in with what I've addressed so far before completing the above steps? Or wait until after I'm done with them?

I don't think there's a need to do all that as long as we know you're converging. Understood you've got a lot on your plate right now!

It's up to @sneakers-the-rat obvs but sounds like a reasonable plan to me

@sneakers-the-rat
Collaborator

As far as I'm concerned, the review timeline is up to you! We don't want to keep the reviewers hanging around forever, but it seems like it would be very reasonable to be responding to raised issues as you go and then pinging back here once you're in a state for everyone to take a look again.
