Code Sample, a copy-pastable example if possible

# Note: This is not real Python code. I can't reproduce these problems with toy data, but they
# show up consistently with certain sets of real data.
from pandas import Series

vocabulary = Series(data=<a few thousand integers>, index=<a few thousand words/tokens, including u"null">)
new_vocab = Series(data=<a smaller list of integers>, index=<a smaller list of words, mostly overlapping with the first one, also including u"null">)
new_words = new_vocab.index.difference(vocabulary.index)

print u"null" in vocabulary.index
print u"null" in new_vocab.index
print u"null" in new_words
> True
> True
> True

from pandas import DataFrame

matrix1 = DataFrame(data=<a bunch of numbers>, index=<list of topic names>, columns=<list of words/tokens, _not_ including NaN or u"nan">)
matrix2 = DataFrame(data=<a bunch of numbers>, index=<list of topic names>, columns=<list of words/tokens, _not_ including NaN or u"nan">)
matrix3 = matrix1.multiply(matrix2)

from numpy import NaN
print NaN in matrix1.columns
print u"nan" in matrix1.columns
print NaN in matrix2.columns
print u"nan" in matrix2.columns
print NaN in matrix3.columns
print u"nan" in matrix3.columns
> False
> False
> False
> False
> True
> True
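For what it's worth, ordinary label alignment can produce all-NaN columns whenever the two frames' columns don't overlap exactly. The toy sketch below (the labels are made up; it is not a reproduction of the bug, since in my real case both frames appear to have identical columns) just shows that alignment behaviour in isolation:

# Runnable toy illustration of column alignment in DataFrame.multiply:
# the result has the union of the two column sets, and labels present in
# only one frame come back as all-NaN columns.
import pandas as pd

m1 = pd.DataFrame([[1.0, 2.0]], index=[u"topic"], columns=[u"cat", u"dog"])
m2 = pd.DataFrame([[3.0, 4.0]], index=[u"topic"], columns=[u"cat", u"null"])
m3 = m1.multiply(m2)

print m3.columns.tolist()
print m3[u"dog"].isnull().all()
> [u'cat', u'dog', u'null']
> True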
Problem description

This may be related to #18988.

I feel bad about reporting these bugs, because I can't duplicate them outside actual scripts with actual (rather voluminous) data. The first common denominator of all the issues is that when I perform operations involving two objects, I end up with extra columns (in DataFrames) or elements (in Series or Indexes): sometimes duplicates of existing columns (where the second, duplicate column contains all null values), sometimes entirely new columns/elements.

The second common denominator is that every case seems to involve "null-like" index values: NaN, u"nan", u"null", u"n/a", and u"null.1.1" are the ones I've actually observed. In one case, u"null" creates a problem; in another, u"null" works just fine, while u"n/a" and u"null.1.1" create problems (but u"null.1" doesn't). In a third case, the null-like values don't exist in the original objects but show up after an operation, as new columns. Interestingly, under pandas 0.20.3 that case gave me two extra columns, NaN and u"nan", while under pandas 0.22.0 I get those two plus u"n/a" (or rather, the "len" function tells me there are four extra columns, but using a "difference" method only gives me a list of three). And these extra columns show up with some sets of data but not others.

In each case where errors have shown up, I've added code to check for them and remove the spurious columns/elements, but it's been very frustrating to run down the different problems that can occur.
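To give an idea of what that cleanup looks like, here is a minimal sketch of the kind of check I mean. The helper name (drop_null_like_labels) and the exact list of null-like strings are made up for illustration, not taken from my real scripts or from the pandas API, and labels like u"null.1.1" (which presumably come from duplicate-name mangling) would need extra handling:

# Minimal sketch (hypothetical helper, Python 2 style to match the code above):
# keep only the DataFrame columns whose labels are neither NaN nor null-like strings.
import pandas as pd

NULL_LIKE_STRINGS = set([u"nan", u"null", u"n/a", u"na", u"none"])

def drop_null_like_labels(df):
    def is_null_like(label):
        if pd.isnull(label):                # actual NaN/None labels
            return True
        if isinstance(label, basestring):   # use str instead on Python 3
            return label.strip().lower() in NULL_LIKE_STRINGS
        return False
    keep = [not is_null_like(c) for c in df.columns]
    return df.loc[:, keep]

# e.g. matrix3 = drop_null_like_labels(matrix3)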
I'm not familiar with Python 3, but I know there's a conversion tool, so maybe I can try that when I get a chance.
I can report that I got rid of most (maybe all?) of the problems by changing two arguments in read_csv: I set "na_values" to "''" and "keep_default_na" to False. Apparently, one factor leading to the issues in this particular case was that the "na" settings in read_csv affect not only the values in a csv but also the index/column labels.

When doing natural language processing and using tokens (words) as DataFrame and Series labels, it's pretty much inevitable that the default null values will come up, unless you explicitly exclude them from your vocabulary (and there are good reasons not to do so). The current behavior of read_csv is not nonsensical, but it could be better documented, and I think there's a good case for having different args for the labels and the values.
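Concretely, the workaround looks roughly like the sketch below; the file name and column layout are hypothetical, and I'm reading the "''" above as the empty string:

# Sketch of the read_csv workaround (hypothetical file/columns). With
# keep_default_na=False, strings like "null", "nan", or "n/a" are kept as
# literal labels/values; only empty fields are treated as missing.
import pandas as pd

df = pd.read_csv(
    "term_topic_matrix.csv",    # hypothetical file
    index_col=0,                # word/token labels in the first column
    na_values=[""],             # treat only empty fields as NaN
    keep_default_na=False,      # don't convert the default NA strings
)

With these two arguments, word tokens that happen to match pandas' default NA strings survive as ordinary index/column labels instead of becoming NaN.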
Output of pd.show_versions()

INSTALLED VERSIONS
commit: None
python: 2.7.14.final.0
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 94 Stepping 3, GenuineIntel
byteorder: little
LC_ALL: None
LANG: en
LOCALE: None.None
pandas: 0.20.3
pytest: 3.2.1
pip: 9.0.1
setuptools: 38.4.0
Cython: 0.26.1
numpy: 1.13.3
scipy: 1.0.0
xarray: None
IPython: 5.4.1
sphinx: 1.6.3
patsy: 0.4.1
dateutil: 2.6.1
pytz: 2017.2
blosc: None
bottleneck: 1.2.1
tables: 3.4.2
numexpr: 2.6.4
feather: None
matplotlib: 2.1.0
openpyxl: 2.4.8
xlrd: 1.1.0
xlwt: 1.3.0
xlsxwriter: 1.0.2
lxml: 4.1.0
bs4: 4.6.0
html5lib: 1.0.1
sqlalchemy: 1.1.13
pymysql: None
psycopg2: None
jinja2: 2.9.6
s3fs: None
pandas_gbq: None
pandas_datareader: None