
Project Rating #991

Closed
williamjmorenor opened this issue Feb 29, 2016 · 41 comments
Labels
feature request UX/UI design, user experience, user interface

Comments

@williamjmorenor

Looking at the Warehouse interface, it should be possible to show some rating, maybe a stars-based system.

@edmorley edmorley mentioned this issue Mar 2, 2016
@demianbrecht
Contributor

+1 to this feature. The only current measure of any sort is the number of downloads which is pretty much meaningless. This would at least give us a mechanism to have some user-driven feedback.

@nlhkabu
Contributor

nlhkabu commented Mar 6, 2016

I agree that it would be good to have some kind of public measure of popularity (particularly for search ordering - see #702), but I think we need to be very careful here. We need to particularly think about the impact a poor rating might have on a package author. Is the review fair and accurate? What recourse do they have to respond?

Importantly, is a rating system going to dissuade newcomers from publishing their packages because of fear of public criticism?

My alternative idea is to mimic the star system provided by GitHub. This would give each package some kind of popularity count, while also allowing package consumers to keep track of their favourite packages. It also carries no 'negative' feedback mechanism, which is good for package authors :)

@demianbrecht
Contributor

Good point about negative feedback. However, I'd argue that the number of stars would be akin to the download count and doesn't really provide any real benefit.

Thinking about this a little more, the problem with reviews is that they're subjective by nature, so accuracy will always be arguable. Perhaps rather than stars, we could have a well defined quality matrix that can be represented visually in a variety of ways. Simple items could be marked off, for example:

Does the project have:

  • Some form of CI setup (e.g. Travis CI et al.), although this one can be tricky for pull-based CI systems
  • Some form of static analysis
  • Unit tests

I'm not entirely sold on this yet, just thinking out loud really.
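Purely to illustrate the shape of such a matrix, a rough sketch follows — the attribute names and detection heuristics are hypothetical, not anything Warehouse implements:

```python
# Hypothetical "quality matrix": a fixed set of yes/no attributes per project
# that could be rendered as checkmarks in a UI. Detection heuristics here are
# deliberately naive and purely illustrative.
def quality_matrix(repo_files):
    """Given the set of file paths in a project's repository, mark off
    a few automatically detectable quality attributes."""
    files = set(repo_files)
    return {
        "has_ci_config": ".travis.yml" in files or "appveyor.yml" in files,
        "has_static_analysis_config": bool(files & {".flake8", ".pylintrc", "tox.ini"}),
        "has_tests": any(path.startswith(("tests/", "test/")) for path in files),
    }

# Example: a project shipping a Travis CI config and a tests/ directory.
print(quality_matrix([".travis.yml", "tests/test_api.py", "setup.py"]))
# {'has_ci_config': True, 'has_static_analysis_config': False, 'has_tests': True}
```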

@xavfernandez
Contributor

This could also be based on the fact that a bunch of other well-rated packages depend (via install_requires) on this particular package.
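As a toy illustration of that idea (not anything PyPI exposes), one could run PageRank over the install_requires graph so that packages pulled in by many other packages score highly; the package names and edges below are invented:

```python
# Toy sketch: score packages by how much the rest of the ecosystem depends on
# them, using PageRank over a made-up install_requires graph.
import networkx as nx

graph = nx.DiGraph()
# An edge A -> B means "A depends on B via install_requires".
graph.add_edges_from([
    ("my-webapp", "Django"),
    ("my-webapp", "requests"),
    ("some-sdk", "requests"),
    ("another-tool", "requests"),
    ("Django", "pytz"),
])
scores = nx.pagerank(graph)  # heavily-depended-on packages accumulate score
for name, score in sorted(scores.items(), key=lambda item: item[1], reverse=True):
    print(f"{name}: {score:.3f}")
```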

@dstufft
Member

dstufft commented Mar 6, 2016

One thing to think about here is that in the past, PyPI did have a review and star rating system. Package authors got really upset about it and fought tooth and nail to get it removed (which it ultimately was). That was before I was involved in the Python community at all, so I don't know the specifics of why people were so against it.

I know that @jacobian was one of the vocal opponents to the system as it existed back then, I'm not entirely sure if the main problem with it was the implementation or the entire idea. He might be willing to come in here and talk a bit about it, but there was a fairly long thread on catalog-sig about it here. Talking to him briefly, he said:

my argument at the time -- which I still agree with -- is that ratings come down to a popularity contest, which is unfun

That's not to say "welp we tried it once, it's never going to happen again", just that if we do it, we need to be very careful about how we do it so as not to encounter the same issues. We have to carefully balance features for people looking to use packages and features for people who are publishing packages; if we skew too far in either direction then we're likely to lose the other group of users, to the detriment of the ecosystem as a whole.

@jacobian

jacobian commented Mar 6, 2016

I'm still strongly against ratings/reviews, for the reasons I tried to articulate in that thread. Voting and popularity contests have no place in a neutral catalog.

To what I said in 2011, I'll add: the web in 2016 is, if anything, a far less "friendly" place than it was even five years ago. We've seen over and over again that systems that allow commenting, rating, voting, and the like are ripe for abuse. Yes, I'm talking about Gamergate and the like. I beg of you: if Warehouse must contain some sort of voting, commenting, or other system, please consider the potential for abuse and consider flagging, moderation, anti-brigading, and other defense mechanisms as pre-conditions to launch. For more on this line of thinking, please read Anil Dash's Against "Don't Read The Comments" -- which should be required reading for anyone building social features into websites these days.

@jacobian

jacobian commented Mar 6, 2016

Thinking further, I think the point I'm trying to make comes down to this:

There's nothing inherently "wrong" or "right" about commenting/voting/etc features. But slapping them onto a site without considering the social implications is wrong. If Warehouse is going to gain these features, please consider them and their ramifications very carefully.

@demianbrecht
Contributor

I agree that having a user-driven rating system is likely something that wouldn't be well placed in Warehouse. What I do think would be valuable is some kind of system such as Django Packages has, where a set of attributes is provided when browsing packages (I believe @ncoghlan had mentioned something akin to this some time ago as well). Things like commit history could be valuable during evaluation stages to help get a gauge on community activity. Other attributes, such as the existence of tests, automated builds, etc., can help speak to the level of effort around quality. These are all attributes that are both outside of the influence of the public (removing that "negative" feeling and the popularity contest) and still speak to quality, which is something that's entirely missing from current PyPI.

@jacobian, @dstufft, @nlhkabu: Any objection to perhaps putting together a list of attributes that we may want to highlight for a given project in an entirely automated way? Then we could look at ways to go about building that data set and visualizing it for the user.

@ncoghlan
Contributor

ncoghlan commented Mar 7, 2016

(This rambles a lot, since better understanding this area is basically my day job, but it's also a huge field with more open questions than answers at this point in time. But my short answer is "Given PyPI's central role in the ecosystem, I think it's wrong to measure and report things beyond the core metrics that only PyPI can provide, without first validating that those measures are actually a useful predictor of pragmatic software quality")

Caveat: I'm heavily biased here, as I think upstream package repositories should act as neutral data sources for independent content curation efforts (e.g. by exposing download counts and links to version control systems and issue trackers, as PyPI does, and dependency counts, which PyPI doesn't do yet, but I hope will be able to do eventually as the Python packaging ecosystem's metadata management improves).

The main benefit of this approach specifically for the PyPI development team is that it makes it possible to run the repository with as light a hand as possible. Community management is hard work, and time consuming enough when you're just wrangling tooling developers and trying to get basic security features right, let alone if you're actively trying to facilitate the formation of a full-blown direct-from-producer-to-consumer market system. Any metric added that can be interpreted as a measure of "software quality" or "community quality" risks drawing the ire of developers of packages that score badly on that metric, and PyPI is unique in the ecosystem for owning the package publication experience (whereas it is relatively much easier for third parties to offer alternate download experiences, as with tools like peep and pipsi, or redistributors that use different formats, like Linux packaging, conda, PyPM, Canopy, etc.).

If a third party rates the contents of PyPI, then there's no conflict between software publishers and the community publication platform - exposing your software to being both used and rated by others is a natural consequence of publishing it as open source. However, if PyPI rates it in some way (beyond reporting objective statistics like download rates), then that impacts the publisher/platform relationship in a way that third party rating systems don't.

The PyPI ToS thus aims to ensure that everyone running "pip install <project>" is always legally entitled to download and deploy that component, but doesn't make any promises about whether or not that component is worth running (or even safe to run, as "pip install python-nation" illustrates).

The potential benefit for the broader Python ecosystem of PyPI staying out of the "software rating" game is that leaving these features out of the base platform facilitates the creation of a competitive market for content curation tools. The oldest of those are the Linux distros, albeit being rather out of favour with many developers due to the relatively high latency in delivering new feature releases to end users, together with cross-platform redistributors like ActiveState, Enthought and Continuum Analytics. In the enterprise sector, vendors like Black Duck offer advanced language independent content scanning capabilities, while newer providers like requires.io and versioneye.com offer more specifically security focused capabilities. DjangoPackages also stands out as an example of a subcommunity successfully coming together to address a shared interest in assessing packages within the context of a particular domain of development (Django-based web services) without reliance on a centralised vendor handling the bulk of the curation effort.

The main technical problem with attempting to highlight additional metrics through PyPI itself at this point in time is that beyond the basics ("using dependencies with known CVEs is a bad idea", "using dependencies with licenses that are incompatible with your project is a bad idea"), we, as an industry, still don't really know what good predictors are for "open source components that will save you more time than they cost you" - searching for open source software on the internet remains an art rather than a science.

There are a bunch of indicators we use, but other than actually going and reading documentation and reviews, most of them are actually measuring "if you choose poorly, at least you won't go down alone" factors like download rates and commit volumes and adoption by high profile individuals and organisations, rather than "is this component likely to be a good fit for my specific use case?".

So I think the way to go about this is precisely the way DjangoPackages went about it: find a community with a shared interest in content curation, and build a curation system aimed at meeting the needs of that community. If a particular approach to package assessment proves to generalise effectively (which, as it turned out, DjangoPackages didn't - attempts to create similar sites for other Python web frameworks fizzled out, even though the necessary work was done to separate the underlying comparison framework out from the site specific DjangoComparisons code), then it may make sense to propose it for standardisation upstream on PyPI itself. However, I think we're quite some way away from anyone having the evidence needed to back up pitching any particular measures as useful predictors.

@dstufft
Member

dstufft commented Mar 7, 2016

It's late and I haven't fully read this. I just wanted to correct (or clarify) the PyPI ToS since it's been a somewhat contentious point among some community members. It does nothing to ensure that someone is legally allowed to modify or even use the software on PyPI. The only right it ensures is the right to distribute, for PyPI and all users of PyPI. Determining whether it's legal to use a particular project is an exercise left to the reader (though the common case is certainly OSS).


On Mar 7, 2016, at 12:59 AM, ncoghlan notifications@github.com wrote:

The PyPI ToS thus aims to ensure that everyone running "pip install <project>" is always legally entitled to download and deploy that component, but doesn't make any promises about whether or not that component is worth running (or even safe to run, as "pip install python-nation" illustrates).

@ncoghlan
Contributor

ncoghlan commented Mar 7, 2016

Good catch - I'd forgotten that "run" wasn't listed as one of the requirements for upload. (I knew "modify" wasn't listed, as that's how the ToS avoids conflicting with the GPL and other copyleft licenses).

@williamjmorenor
Author

I really like @nlhkabu's idea to use something similar to GitHub. This way developers can show their preference for a particular library, and there is no way for a library to get bad karma or even a negative score. Plus, a good number of developers make use of GitHub, so it is a system that is familiar.

@williamjmorenor
Author

A list of dependent projects can be another useful measure. Many of us make indirect use of libraries; for example, many users may star and use the babel library but never star pytz, so showing the dependents of pytz can give credit to its developers.

@demianbrecht
Contributor

Thanks for the tremendously thoughtful response @ncoghlan.

One of my motivations behind wanting a feature like this is the personal frustration I've had during package evaluations, especially when doing so within a timebox when working on projects at work. Adding a mechanism to provide Warehouse users with additional data around package quality and community activity seemed like a big win to me.

That said, I think you're absolutely spot on with it being an art and not a science, and until curation has been tried, tested, and iterated on to a point where it's globally accepted by the Python community at large, it doesn't belong in Warehouse (if even then, based on your other points).

After reading and giving this a whole lot more thought, I think I've come to the conclusion (FWIW) that I'm -1 on this feature (quality matrix and GitHub star-like system alike). At least until something has been proven out of band with Warehouse, at which point it would likely be worthwhile to reevaluate (though it wouldn't be a slam dunk).

@nlhkabu
Contributor

nlhkabu commented Mar 7, 2016

I'm -1 on this feature (quality matrix and Github star-like system alike)

I agree. As you mentioned, the GitHub star system is really not that different from download data, so I am happy to strike that idea off the list.

As for other community ratings, I agree wholeheartedly with @jacobian. One of my design objectives is to make the site friendly and usable for all Python community members - it seems a rating system would only undermine this.

Something stood out from @ncoghlan's post:

any metric added that can be interpreted as a measure of "software quality" or "community quality" risks drawing the ire of developers of packages that score badly on that metric

We currently have badges (for version, dependencies, test coverage, test status) on each of the package pages - I wonder if these are also out of the scope of what PyPI should provide? Comments on #786 would be appreciated.

@dstufft
Member

dstufft commented Mar 7, 2016

I've been thinking about this all morning and talking it over with folks, and here is where I'm at right now.

I believe that a comment system requires a commitment to moderate and police the comments made on it. While you can provide tools to try and crowdsource that to some degree, at the end of the day, if we gain a comment system we're responsible for ensuring that those comments contain the kind of content we want to see on PyPI. I have serious doubts about our ability to actually effectively moderate that, particularly given the handful of people with the power to do so and what their schedules already look like in terms of time.

I also believe that a review system for a software project is a fundamentally harder thing to do than for your typical product on, say, Amazon. There are only so many (legitimate) ways that you can use a bowl you bought from Amazon, and there is a fairly finite set of things you can judge it on. For a project like Django, however, there is a much more diverse set of use cases and things you can judge it on. This makes it hard to even write a review for it (in fact, I can probably count on one hand the number of critiques of software I've read in my life that weren't seriously flawed in some way).

I believe that GitHub-like star ratings are likely to provide little to no real information beyond that of download counts and will likely contribute to having people feel like publishing packages to PyPI comes with a built-in popularity contest (more so than is inherent in the nature of competing software projects). I think it's likely that it will provide even less information than we can currently get from download counts, because at the very least a download generally represents some action taken by someone (barring mirroring and fraud), whereas all a star really means is that you convinced someone to click a button on a website once. There's no real investment or statement behind the star, so it doesn't really mean much, if anything.

As far as trying to provide some sort of automated comparison of quality for a project, I think it's hard to do that well. There are very few (if any) things that I can think of that we can determine automatically with a low enough false positive rate that they really speak in a meaningful way to the "quality" of a project. That number gets even smaller when you remove things which are otherwise self-evident (e.g. things like "has a description"; it's fairly obvious from a project detail page whether it has a description already or not, so a badge doesn't add much). That doesn't mean that I think it's impossible to do something like this, just that I think it would be hard to find many things to include in this matrix.

I do agree with @ncoghlan that we should focus on providing data, particularly data that is difficult or impossible for someone other than PyPI to provide (although I disagree with the characterization of the data as objective - what data you choose to show in the UI and how you show it is fundamentally about choosing what story we want to tell with the data, and is a subjective statement on what we think is important). Which I think segues nicely into a thing that I think we could do: provide functionality similar to GitHub "watchers" - allow people to "watch" a project, which would subscribe them to all of the notifications for new releases and such for that project, and then expose the number of watchers. This is a bit more meaningful than just stars, since there's some level of investment by the people doing the watching (they're asking to get notifications/emails/etc. based on activity on the project), which keeps people from just watching tons of projects (else they end up getting "spammed" by notifications). It does still have a bit of a popularity contest feel to it, but I think it's similar in vein to the download counts.

@dstufft
Member

dstufft commented Mar 7, 2016

Another nice thing about watchers (versus stars or download counts) is that they are not just a measure of usage, but also a measure of investment and degree of importance.

@ncoghlan
Contributor

ncoghlan commented Mar 8, 2016

I admit when I was talking about data collected by PyPI being objective, I was thinking in terms of an API for other (opinionated) services to use to retrieve that data, rather than decisions on how to present the information in the default web UI.

In terms of "watching" projects, I'm less sure of the value beyond the existing download counts, since it's already possible to watch packages even on current PyPI by registering them with https://release-monitoring.org/ (Debian and openSUSE also have their own language independent techniques for upstream monitoring)

That said, those other techniques are generally polling based (and hence may take a while to notice new releases), while an asynchronous event stream from PyPI would allow new releases to be picked up almost immediately.
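To make the polling/event-stream contrast concrete, here is a minimal sketch of the polling approach those services use, written against PyPI's real JSON API; the project name and polling interval are arbitrary:

```python
import time
import requests

def latest_version(project):
    """Return the current release of a project via PyPI's JSON API."""
    resp = requests.get(f"https://pypi.org/pypi/{project}/json", timeout=10)
    resp.raise_for_status()
    return resp.json()["info"]["version"]

# Naive polling: new releases are only noticed on the next poll, which is the
# latency an asynchronous event stream from PyPI would eliminate.
seen = latest_version("requests")
while True:
    time.sleep(3600)  # poll hourly
    current = latest_version("requests")
    if current != seen:
        print(f"new release detected: {current}")
        seen = current
```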

@dstufft
Member

dstufft commented Mar 8, 2016

I think a possible problem with download counts as the sole metric is that it ignores a number of things:

  • It treats all dependencies of a project with the same importance, but I think that fails to account for the fact that your "main" dependency, like, say, Django, is likely to be something you care about more than a tertiary dependency, like, say, psycopg2 (for a Django web app).
  • The download counts can only represent usage that comes from "typical" dependence. This means that projects like, say, packaging or versioneer are likely to be underrepresented, since one of their primary methods of use is bundling.
  • The download counts can only represent usage via PyPI, and ignore usage through a redistributor. While it's not unusual for redistributors to offer their own metrics, having this as a separate metric goes back to the fact that it's a bit different from pure download (or install) counts; it would imply a stronger "dependence" than simply a download or install.

I was always planning on adding the feature of watching a package (partially tracked in #997 and #798) but I'm not sure if exposing the number of watchers a project has is a good idea or not (though I lean towards yes).

@ncoghlan
Contributor

ncoghlan commented Mar 8, 2016

Yeah, a watchers feature is probably worthwhile - I came around to that point of view when I checked Anitya and noticed it works by way of daily polling for new versions.

As a possible separate option from watchers (and keeping in mind I'm talking about changes after the PyPI replacement milestone), it might be interesting to have the notion of "registered redistributors", where downstream redistributors can post back to PyPI to say "we redistribute this".

I'm not actually sure that's a good idea (mainly because we'd need some kind of review process to avoid link spam), but I can see potential benefits for both package maintainers (as they can see at a glance who is redistributing their software), and for redistributors (they get a discovery path from the upstream releases to their downstream counterparts).

@demianbrecht
Contributor

demianbrecht commented Mar 8, 2016 via email

@dstufft
Member

dstufft commented Mar 8, 2016

@demianbrecht To play a bit of devil's advocate (though I'm still on the fence personally), how does this differ from download counts? The accuracy of that varies wildly across projects which are only available for download on PyPI, projects which are available on PyPI and on their own systems, and still other projects which are only available to download from their own systems. Then add in the pip download cache, the pip wheel cache, bandersnatch mirrors, and redistributors like Debian, Fedora, Anaconda, etc. I'm not sure that the value of exposing that number would be any more (or less) misleading than the current numbers we expose. Maybe that's an argument for removing download counts (although when we removed them briefly in 2013, when we first switched to Fastly, people complained loudly), or maybe it's an argument that one imperfect metric doesn't excuse another (though I would argue that there are no perfect metrics, just flawed ones that require interpretation to be fully meaningful).

@demianbrecht
Contributor

@dstufft heh you're absolutely right, and I'd be in favor of removing download counts as well. That said, I also remember the backlash of 2013, so I'm fine with them being grandfathered in. You're also right in that one imperfect metric doesn't excuse another.

To be clear: I'm not horribly opposed to the feature in and of itself. What it really boils down to is: what is the vision for PyPI/Warehouse? Is it to be a repository, simply providing access and impartial facts about the projects that it hosts, or is it to also provide user-driven color commentary on the hosted projects and become Yet Another community resource? If it's the former, then I think that all extraneous, user-driven features should be nixed from Warehouse. If it's the latter, then features like watcher counts and such make more sense.

Personally, I'd like to apply the Unix philosophy of doing one thing and doing it well: A repository. Other community-related features should be nixed from Warehouse and offloaded to other, external applications.

This isn't to say that there shouldn't be community tie-ins (à la GitHub webhooks and services) within PyPI. I think adding functionality around that would provide a much richer user experience than attempting to rebuild wheels within PyPI, especially where the value it's adding may contradict the overall vision of what PyPI/Warehouse should be.

That all said, I'm brand spankin' new to working on PyPI/Warehouse development so I may be entirely off my face with all of this :)

@ncoghlan
Contributor

ncoghlan commented Mar 9, 2016

@demianbrecht I think you have the right kinds of problems and perspectives in mind to try and balance :)

When it comes to PyPI & metrics, I think there are 3 key aspects to keep in mind:

  1. Data collection (in many cases, only PyPI can do this, since it's the common publishing platform that seeds the rest of the Python package distribution ecosystem)
  2. Data publication via an API for use by 3rd party review services (again, only PyPI can do this for cases where PyPI holds the raw data)
  3. Data publication in the default web UI (here we have the option of deferring to 3rd party review services like djangopackages.com)

I think there are several areas where 1 & 2 will make sense, with PyPI acting as a clearing house for metrics just as it does for actual software distribution. I see that falling into three categories:

  • metrics about client behaviour (download counts, watcher counts - perhaps categorised in some way to improve data granularity, such as separating out download counts for known mirroring clients like bandersnatch, or attempting to report unique downloading IPs rather than raw download counts)
  • opt-in metrics supplied by publishers (such as the suggested test status and test coverage metrics)
  • analysis of the dependency graph between packages (this one can be handled by 3rd parties that mirror the whole of PyPI, so doing it in PyPI itself would be an efficiency measure that lowers the barriers to making use of the dependency graph details, rather than something that third parties inherently can't do on their own)

For the 3rd aspect, my own main interest would be in nudging users in the direction of a "risk management" approach to consuming open source software, and in that context, the merit of high download counts and high watcher counts is "If this was actively malicious, it wouldn't be so popular" (the open source equivalent of "nobody ever got fired for buying <well known supplier>"), while the benefit of test status and test coverage information is that tested software with good test coverage is more likely to work as advertised.
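To make aspects 1 and 2 concrete, here is a purely hypothetical example of what a machine-readable metrics payload from PyPI could look like under those three categories; none of these field names exist in any real PyPI API.

```python
# Hypothetical metrics payload grouped by the three categories above; every
# field name and value here is invented for illustration.
example_metrics = {
    "client_behaviour": {
        "downloads_last_30_days": 123456,
        "downloads_excluding_known_mirrors": 98765,
        "watchers": 42,
    },
    "publisher_supplied": {  # opt-in, reported by the project itself
        "test_status": "passing",
        "test_coverage": 0.87,
    },
    "dependency_graph": {
        "direct_dependents": 310,
        "transitive_dependents": 4100,
    },
}
```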

@ismail-s

FWIW, in the Haskell community, Hackage allows you to vote for packages and Stackage has Disqus comments. Neither seems to be used much from what I've seen, although the Haskell community is much smaller than the Python one.

@stared

stared commented May 16, 2016

I am strongly for GitHub stars and the number of downloads (ideally total and e.g. this week/month). As of now it is hard to distinguish between actively developed and widely used packages and ones abandoned years ago. Right now when I search PyPI it's like searching a dumpster, with rare pearls among dead packages.

Examples where it works:

(two screenshots omitted)

I would rather use already existing stats than try to add a new voting system. (Why add complexity?)

@ncoghlan
Contributor

When it comes to finding useful Python software, searching on Google with "python" appended as an additional search term is generally going to give much better results than searching PyPI directly, and that's always going to be true regardless of what happens on the PyPI side of things (the general purpose search engines have inordinately more information sources to draw from when ordering results, and "incoming links to the project documentation or download page" is actually a pretty decent measure of adoption).

In terms of metrics reported by PyPI itself, there's zero chance of reporting a metric maintained by some other system (as with the reference to GitHub stars themselves, rather than a star-like system). The most we'll provide is a machine readable link to the relevant data source (such as a project's source code hosting service), wherever that may be, and even that would be dependent on project maintainers filling it out in their project metadata.
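For illustration, this is the kind of machine-readable link a maintainer can already supply via standard packaging metadata; the project name and URLs below are placeholders.

```python
# setup.py sketch: project_urls is standard setuptools metadata for pointing
# at external services (source hosting, issue tracker, docs), which PyPI can
# surface without maintaining any third-party metric itself.
from setuptools import setup

setup(
    name="example-project",
    version="1.0.0",
    url="https://github.com/example/example-project",
    project_urls={
        "Source Code": "https://github.com/example/example-project",
        "Issue Tracker": "https://github.com/example/example-project/issues",
        "Documentation": "https://example-project.readthedocs.io/",
    },
)
```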

@dstufft
Member

dstufft commented May 16, 2016

Right, a key problem with using GitHub stars is that not every project is on GitHub (although a lot of them are), and even if they were, it seems like a bad idea to bake that assumption into PyPI when it's entirely within the realm of reasonable possibilities that in a few years' time GitHub goes the way of SourceForge. Overall, across everything we do here, we try to avoid tying ourselves to external systems in ways that can't be easily replaced. For instance, using the Fastly CDN? That's fine, because we can switch to Cloudflare or Akamai or doing it ourselves without any impact on end users. Forcing everyone to log into the site using GitHub? That's bad, because we can't swap it out without end users being affected.

I'm perfectly fine with measures of popularity, as long as they're meaningful measures. The number of stars is a pure popularity contest without much meaning behind it, since all it really suggests is that you convinced someone to click a button - and I think people would attempt to game that system. I think that the best measures of popularity inherently stem from taking some action that people do for its own purpose, and exposing the number of people who take that action.

@nlhkabu nlhkabu added the requires triaging maintainers need to do initial inspection of issue label Jul 2, 2016
@di di removed the requires triaging maintainers need to do initial inspection of issue label Dec 7, 2017
@brainwane brainwane added the UX/UI design, user experience, user interface label Mar 20, 2018
@brainwane
Contributor

A little context on why I'm weighing in, and then my take.

Context: I'm the project manager for Warehouse, via the Mozilla Open Source Support Program grant, to replace the legacy PyPI site. The new site is about to hit beta and we'll be announcing that and asking folks to test it. Our developer roadmap focuses on the essential things we need to do to replace pypi.python.org within the next few months. Once that's done, we can work on longer-term features (such as new APIs). I'm also running a sprint at PyCon in May and happy to discuss this there.

And for basic and advanced PyPI download usage statistics, we now provide BigQuery support, although this is not discoverable through PyPI project detail pages and, for reasons mentioned above, perhaps should not be.
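(As a rough sketch of querying those statistics — assuming the public BigQuery download dataset and the google-cloud-bigquery client; the dataset name has changed over time, so treat it as illustrative:)

```python
# Count downloads of a project over the last 30 days from the public PyPI
# download statistics dataset on BigQuery. Requires Google Cloud credentials.
from google.cloud import bigquery

client = bigquery.Client()
query = """
    SELECT COUNT(*) AS downloads
    FROM `bigquery-public-data.pypi.file_downloads`
    WHERE file.project = 'requests'
      AND DATE(timestamp) >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
"""
for row in client.query(query).result():
    print(f"downloads in the last 30 days: {row.downloads}")
```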

My assessment: In the past two years, Libraries.io has become a provider of PyPI statistics for projects, including GitHub stars and forks, dependency tracking, and other relevant factors. It's backed by Tidelift but it's open source. It has an API we could hook into. (See #3252 for some more thoughts on that.) @andrew is working on major improvements to their ranking metrics, and has further metrics work in their backlog.

Therefore, I think we should strongly consider referring users to libraries.io for project metrics and rankings, and submit future issues there to help them improve, rather than implement any project ratings primarily in PyPI. And then we can decide, based on what they have available, which of their stats to display on PyPI project detail pages.
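For example, pulling a project's SourceRank and related numbers from the Libraries.io API could look roughly like the sketch below; the endpoint shape and field names are taken from their public API documentation but should be double-checked, and the API key is a placeholder.

```python
import requests

# Fetch Libraries.io's view of a PyPI project (SourceRank, stars, dependent
# counts, ...). The project endpoint is https://libraries.io/api/<platform>/<name>
# and requires an api_key query parameter.
resp = requests.get(
    "https://libraries.io/api/pypi/Django",
    params={"api_key": "YOUR_API_KEY"},
    timeout=10,
)
resp.raise_for_status()
project = resp.json()
print(project.get("rank"), project.get("stars"), project.get("dependents_count"))
```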

@brainwane brainwane added this to the Cool but not urgent milestone Mar 20, 2018
@demianbrecht
Contributor

@brainwane +1, I like the idea of using libraries.io as they're giving far more thought to meaningful metrics than I suppose anyone would ever want to put directly into pypi.

@ncoghlan
Contributor

Lending PyPI's weight to Tidelift's efforts to come up with a language independent ranking approach that encourages and rewards good community behaviours seems like a good direction to me.

@williamjmorenor
Author

williamjmorenor commented Mar 21, 2018 via email

@brainwane
Contributor

People who want a watching/watchers feature: you can log in to libraries.io via GitLab, GitHub, or BitBucket to watch packages and get email notifications of new releases + an onsite newsfeed. It can also watch your repositories & notify you of changes in your dependencies, though it doesn't yet support some dependency manifests like tox.ini. Surfacing the number of people subscribed for a package is in their backlog.

I linked to this thread on distutils-sig and will leave this issue open for a few more days to let more people comment, but I'm leaning towards closing it and opening:

  • an issue to link to the libraries.io page for a project on each Warehouse project detail page
  • a PR to add an FAQ entry about project stats, linking to BigQuery guidance and libraries.io

brainwane added a commit that referenced this issue Mar 24, 2018
Addresses the half of #991.
brainwane added a commit that referenced this issue Mar 25, 2018
Addresses the user help half of #991.
brainwane added a commit that referenced this issue Mar 25, 2018
Addresses the user help half of #991.
@andrew

andrew commented Mar 29, 2018

Sorry for the delay, I was mostly offline for the Bath Ruby conference last week. If there's anything else I can do to make it easier to get statistics out of Libraries.io for you to use, do let me know.

SourceRank 2.0 development is now underway in librariesio/libraries.io#2056, would love to hear any feedback/thoughts on the details, especially how it works within the python ecosystem.

Would some kind of badge or embeddable widget be a lightweight way of adding more info to each page?

@brainwane
Contributor

Now that #3400 and #3434 are merged, I'd like to close this issue and thank everyone for participating in the discussion.

Thanks for the link and invitation, @andrew. Can you point to a demo or any screenshots? And is there an existing issue somewhere in the libraries.io set of repositories about badge-y things, or a repo where such discussion would be best?

@andrew

andrew commented Apr 3, 2018

@brainwane there aren't any badges/embeds in Libraries.io at the moment, I've opened an issue up on Libraries.io: librariesio/libraries.io#2077
