
Formal jupyter_kernel_mgmt support #151

Closed
mcg1969 opened this issue Dec 2, 2019 · 10 comments

Comments

@mcg1969
Collaborator

mcg1969 commented Dec 2, 2019

takluyver/jupyter_kernel_mgmt#32

The CondaKernelProvider support we added to nb_conda_kernels is unfortunately obsolete, but can be remedied! When jupyter_kernel_mgmt 0.5 is complete, we should work to implement this. This issue will organize the plan.

cc: @kevin-bates @echarles @takluyver

@takluyver

JKM 0.5 is already out :-)

@mcg1969
Collaborator Author

mcg1969 commented Dec 2, 2019

Great!

@kevin-bates

After taking a brief look and applying various code changes, I'm seeing a number of issues that I'm not sure how to handle with my Python skills.

  1. Now that providers are no longer in jupyter_client (JC) and jupyter_kernel_mgmt (JKM) doesn't support traitlets, the derivation of CondaKernelSpecManager from KernelSpecManager breaks. JKM's KernelSpecManager has a specific set of parameters, but they aren't mapped to traitlets. As a result, CondaKernelSpecManager's call to super().__init__ raises an invalid-parameter exception when the base class comes from JKM (a minimal sketch follows this list).

  2. Today, nb_conda_kernels simply tries to import the discovery module and, if it isn't present (in jupyter_client), derives CondaKernelProvider from object so that the module still loads, knowing the find_kernels() and launch() methods won't be called. Since both JC and JKM can coexist, how should that be handled?

  3. JKM requires asyncio, which doesn't exist in Python 2.7, and JKM doesn't currently run on 3.5 (we can change that, but it's nearly EOL). How would conditional requirements be handled?
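
A minimal sketch of the item-1 failure mode, using purely hypothetical stand-in classes (not the real jupyter_client / jupyter_kernel_mgmt APIs):

class TraitletsStyleBase:
    """jupyter_client-style base: configuration arrives as keyword arguments."""
    def __init__(self, **kwargs):
        for name, value in kwargs.items():
            setattr(self, name, value)

class FixedSignatureBase:
    """jupyter_kernel_mgmt-style base: a fixed parameter list, no traitlets."""
    def __init__(self, kernel_dirs=None):
        self.kernel_dirs = kernel_dirs or []

class CondaKernelSpecManager(TraitletsStyleBase):
    def __init__(self, conda_only=False, **kwargs):
        super().__init__(**kwargs)   # fine against the keyword-friendly base
        self.conda_only = conda_only

# Swapping the base class to FixedSignatureBase reproduces the failure described above:
# CondaKernelSpecManager(parent=app) -> TypeError: unexpected keyword argument 'parent'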

Given these issues, would it make sense to remove discovery from nb_conda_kernels and instead develop a separate package, dependent on nb_conda_kernels (for the conda-specific code), that is provider-specific (e.g., nb_conda_provider or nb_conda_kernel_provider)? I believe this would allow for future changes down the road (e.g., parameterized kernel launch) that would be difficult to make work in a shared module.

Thoughts?

cc: @Zsailer

@mcg1969
Collaborator Author

mcg1969 commented Dec 2, 2019

Today, nb_conda_kernels simply looks for the discovery module via import and, if not present (in jupyter_client), derives the CondaKernelProvider from object in order to satisfy loads knowing the find_kernels() and launch() methods won't be called. Since both JC and JKM can co-exist, how should that be handled?

This is only being done to facilitate testing, since we haven't yet had a real opportunity to exercise it.

JKM requires asyncio, which doesn't exist in 2.7 and JKM doesn't currently run on 3.5 (we can change that, but its nearly EOL). How would conditional requirements be handled?

My thought is that jupyter_kernel_mgmt will be an optional dependency of nb_conda_kernels. If a conditional import of JKM succeeds, it will behave accordingly; otherwise it will fall back to the older behavior. This will likely solve item 1 too.
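
A minimal sketch of that conditional-import pattern (HAVE_JKM, the jkm_provider module, and the helper names below are illustrative, not actual nb_conda_kernels code):

try:
    import jupyter_kernel_mgmt  # noqa: F401
    HAVE_JKM = True
except ImportError:
    # e.g. a Python 2.7 environment: JKM cannot be installed, so fall back.
    HAVE_JKM = False

def list_conda_kernels():
    # Hypothetical helper showing the two code paths.
    if HAVE_JKM:
        from .jkm_provider import find_kernels_via_provider  # hypothetical JKM-aware submodule
        return find_kernels_via_provider()
    from .manager import CondaKernelSpecManager  # existing jupyter_client code path
    return CondaKernelSpecManager(conda_only=True).get_all_specs()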

@kevin-bates

Thanks for the responses. I'm realizing that I may have gotten wrapped around the axle a bit. Since the purpose of providers is to allow custom implementations of BOTH KernelSpecManager and KernelManager, there's nothing that prevents CondaKernelSpecManager from deriving from jupyter_client's KernelSpecManager.

Also, the launch method will return an instance of KernelManager from jupyter_kernel_mgmt, not jupyter_client, but again, that should be fine.

So installation of nb_conda_kernels will wind up introducing jupyter_client as a dependency that wouldn't be necessary otherwise - but that's no different than some other provider depending on another module.

Any hints on how to solve item 3? Could py 2.7 be dropped via a new release (2.3 or 3.0)?

@mcg1969
Collaborator Author

mcg1969 commented Dec 2, 2019

I'm not concerned at all about item 3. We will be able to support Python 2.7 with less functionality. Remember, if nb_conda_kernels is in a Python 2.7 environment, it won't be able to load JKM, so we will fall back to the existing code path, and there will be no undue dependency on asyncio.

@kevin-bates

OK, I thought the new declarations would generate syntax errors. I apparently don't understand how/when Python "compiles" its code.

discovery.py would look something like this. Aren't the @asyncio.coroutine decorator, the async keyword, and await still an issue?

try:
    import asyncio
    from jupyter_kernel_mgmt.discovery import KernelProviderBase
    from jupyter_kernel_mgmt.subproc import SubprocessKernelLauncher
except ImportError:
    # Silently fail in environments that do not have
    # jupyter_kernel_mgmt installed. This allows us to do some
    # simple testing of this code even without JKM.
    KernelProviderBase = object

from .manager import CondaKernelSpecManager


class CondaKernelProvider(KernelProviderBase):
    id = 'conda'

    def __init__(self):
        self.cksm = CondaKernelSpecManager(conda_only=True)

    @asyncio.coroutine
    def find_kernels(self):
        for name, data in self.cksm.get_all_specs().items():
            yield name, data['spec']

    async def launch(self, name, cwd=None, launch_params=None):
        spec = self.cksm.get_kernel_spec(name)
        return await SubprocessKernelLauncher(
            kernel_cmd=spec.argv, extra_env=spec.env, cwd=cwd, launch_params=launch_params).launch()

@mcg1969
Collaborator Author

mcg1969 commented Dec 2, 2019

Ah, yes, you're right, they would indeed cause errors. We'll have to have separate async and non-async submodules.
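
For example (a hedged sketch; the _async_provider module name and layout are hypothetical): since Python only parses a module when it is imported, the async/await syntax can live in a submodule that is imported only after the JKM import has already succeeded.

# nb_conda_kernels/discovery.py (hypothetical layout)
# On Python 2.7 the first import fails with ImportError (JKM is Python 3 only),
# so _async_provider -- the submodule containing the async/await syntax --
# is never imported and therefore never parsed.
try:
    from jupyter_kernel_mgmt.discovery import KernelProviderBase  # noqa: F401
    from ._async_provider import CondaKernelProvider  # hypothetical module with `async def launch`
except ImportError:
    CondaKernelProvider = None  # provider support unavailable; the legacy code path still works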

@kevin-bates

@mcg1969 - Your comment implies that there's a different "load pattern" that takes place when a module is in a sub-directory (which is what I assume a 'submodule' is). If that's the case, do we need discovery.py at all if the entry_points stanza in setup.py is updated to simply hit nb_conda_kernels.subproc.provider directly - where CondaKernelProvider is defined?

    entry_points={
        "jupyter_kernel_mgmt.kernel_type_providers": [
            # The name before the '=' should match the id attribute
            'conda = nb_conda_kernels.subproc.provider:CondaKernelProvider',
        ]}

If both an async and a dummy version (dummy because it will never be executed) are necessary, don't we wind up pushing the problem down a level, since there's really nothing other than import statements in discovery.py at that point? I'm not following the purpose of discovery.py or the non-async version at this point.

I apologize for being behind here, but we're getting into Python idiosyncrasies that I haven't had to really deal with before. 😄 Thank you for your patience.

@echarles

echarles commented Jun 30, 2023

This issue should be closed in favor of #228

@mcg1969 closed this as not planned Mar 6, 2024