---
output: github_document
---
```{r, echo = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "##",
fig.path = "man/images/"
)
```
```{r echo=FALSE, results="hide", message=FALSE}
library("badger")
```
# stopwords: the R package
[![CRAN Version](https://www.r-pkg.org/badges/version/stopwords)](https://CRAN.R-project.org/package=stopwords)
`r badge_devel("quanteda/stopwords", "royalblue")`
[![R build status](https://github.com/quanteda/stopwords/workflows/R-CMD-check/badge.svg)](https://github.com/quanteda/stopwords/actions)
[![codecov](https://codecov.io/gh/quanteda/stopwords/branch/master/graph/badge.svg)](https://app.codecov.io/gh/quanteda/stopwords)
[![Downloads](https://cranlogs.r-pkg.org/badges/stopwords)](https://CRAN.R-project.org/package=stopwords)
[![Total Downloads](https://cranlogs.r-pkg.org/badges/grand-total/stopwords?color=orange)](https://CRAN.R-project.org/package=stopwords)
R package providing "one-stop shopping" (or should that be "one-shop stopping"?) for stopword lists in R, for multiple languages and sources. No longer should text analysis or NLP packages bake in their own stopword lists or functions, since this package can accommodate them all, and is easily extended.
Created by [David Muhr](https://github.com/davnn), and extended in cooperation with [Kenneth Benoit](https://github.com/kbenoit) and [Kohei Watanabe](https://github.com/koheiw).
## Installation
```{r, eval = FALSE}
# from CRAN
install.packages("stopwords")
# Or get the development version from GitHub:
# install.packages("devtools")
devtools::install_github("quanteda/stopwords")
```
## Usage
```{r}
head(stopwords::stopwords("de", source = "snowball"), 20)
head(stopwords::stopwords("ja", source = "marimo"), 20)
```
For compatibility with the former `quanteda::stopwords()`:
```{r}
head(stopwords::stopwords("german"), 20)
```
Explore sources and languages:
```{r}
# list all sources
stopwords::stopwords_getsources()
# list languages for a specific source
stopwords::stopwords_getlanguages("snowball")
```
## Languages available
The following table shows the language coverage currently available, by source. Note that how comprehensive the stopword lists are varies by source, and that covering more languages does not necessarily make a source better than one with more limited coverage. (There may be many reasons to prefer the default "snowball" source over the "stopwords-iso" source, for instance.)
```{r, echo=FALSE}
dat <- read.csv("data-raw/language.csv", as.is = TRUE, check.names = FALSE)
dat[is.na(dat)] <- ""
dat[dat == "TRUE"] <- "\u2713"
knitr::kable(dat, align = c("l", "l", "c", "c", "c", "c", "l"))
```
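As a rough illustration of how much the lists differ in comprehensiveness, compare the number of English stopwords in the default "snowball" source with the much more inclusive "stopwords-iso" source:
```{r}
# number of English stopwords in the default "snowball" source
length(stopwords::stopwords("en", source = "snowball"))
# number of English stopwords in the "stopwords-iso" source
length(stopwords::stopwords("en", source = "stopwords-iso"))
```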
## Modifying stopword lists
It is now possible to edit your own stopword lists using the interactive editor provided by the **quanteda** package (>= v2.02). For instance, to edit the English stopword list for the Snowball source:
```{r eval = FALSE}
# edit the English stopwords
my_stopwords <- quanteda::char_edit(stopwords("en", source = "snowball"))
```
To edit stopwords whose underlying structure is a list, such as the "marimo" source, we can use the `list_edit()` function:
```{r eval = FALSE}
# edit the English stopwords, keeping the list structure
my_stopwordlist <- quanteda::list_edit(stopwords("en", source = "marimo", simplify = FALSE))
```
Finally, it's possible to remove stopwords using pattern matching. The default is the easy-to-use ["glob" style matching](https://en.wikipedia.org/wiki/Glob_(programming)), which is equivalent to fixed matching when no wildcard characters are used. So to remove the possessive pronouns from the English Snowball list, for instance, this would work:
```{r}
library("quanteda", warn.conflicts = FALSE)
posspronouns <- stopwords::data_stopwords_marimo$en$pronoun$possessive
posspronouns
stopwords("en", source = "snowball") %>%
head(n = 10)
```
See the difference when we remove them -- "my", "ours", and "your" are gone:
```{r}
stopwords("en", source = "snowball") %>%
head(n = 10) %>%
char_remove(pattern = posspronouns)
```
There is no `char_add()`, since it's just as easy to use `c()` for this, but there is a `char_keep()` for positive selection rather than removal.
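For example, a small sketch building on the `posspronouns` vector defined above (the added terms `"rt"` and `"via"` are purely illustrative):
```{r}
# add custom stopwords simply by concatenating with c()
custom_stopwords <- c(stopwords("en", source = "snowball"), "rt", "via")
tail(custom_stopwords)

# keep only the possessive pronouns, by positive selection
stopwords("en", source = "snowball") %>%
  char_keep(pattern = posspronouns)
```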
## Adding stopwords to your own package
In v2.2, we've removed the function `use_stopwords()` because the dependency on
**usethis** added too many downstream package dependencies, and **stopwords** is
meant to be a lightweight package.
However, it is easy to add a re-export for `stopwords()` to your package by adding the following file as `stopwords.R`:
```{r, eval = FALSE}
#' Stopwords
#'
#' @description
#' Return a character vector of stopwords.
#' See \code{stopwords::\link[stopwords:stopwords]{stopwords()}} for details.
#' @usage stopwords(language = "en", source = "snowball")
#' @name stopwords
#' @importFrom stopwords stopwords
#' @export
NULL
```
and adding `stopwords` to the `Imports:` field of your `DESCRIPTION` file.
## Contributing
Additional sources can be defined and contributed by adding new data objects, as follows:
1. **Data object**. Create a named list of character vectors, in UTF-8 encoding, containing the stopwords for each language. Each element is named by its [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) language code, and its value is the character vector of stopwords for literal matching. The data object should follow the package naming convention and be called `data_stopwords_newsource`, where `newsource` is replaced by the name of the new source (see the sketch after this list).
2. **Documentation**. The new source should be clearly documented, especially the source from which it was taken.
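As a minimal sketch of what such a data object might look like (the source name, languages, and words below are placeholders, not a real contribution):
```{r, eval = FALSE}
# hypothetical new source covering two languages (placeholder words only)
data_stopwords_newsource <- list(
  en = c("a", "an", "the"),
  de = c("der", "die", "das")
)

# save the object to the package's data/ directory
save(data_stopwords_newsource,
     file = "data/data_stopwords_newsource.rda",
     version = 2)
```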
## License
This package, as well as its source repositories, is licensed under the MIT License.