Code Sample
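A minimal sketch of the behaviour, assuming a hypothetical in-memory SQLite table named `t` (not the original sample), would be something like:

```python
import sqlite3
import pandas as pd

# Build a small in-memory table with an integer column containing a NULL.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, value INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(1, 10), (2, None), (3, 30)])

df = pd.read_sql("SELECT * FROM t", con)
print(df.dtypes)
# The 'value' column comes back as float64 because of the NULL row,
# even though the underlying SQL column is INTEGER.
```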
Problem description
If the SQL table contains integer columns with NULL values, the returned DataFrame has them converted to floats. I understand that pandas currently cannot represent integer columns containing NULL values. However, I would like to be able to tell pandas NOT to convert the values to floats, so that no precision is lost. This is possible for other, similar functions in pandas, such as read_csv, which has a dtype argument. Why not for read_sql?
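For comparison, this is roughly how the dtype argument of read_csv lets the caller keep a column with missing values from being coerced to float64 (the column names and values below are made up purely for illustration):

```python
import io
import pandas as pd

csv_data = "id,big_value\n1,9007199254740993\n2,\n"

# Default parsing: the missing value forces float64, and 9007199254740993
# (2**53 + 1) cannot be represented exactly as a 64-bit float, so precision is lost.
default = pd.read_csv(io.StringIO(csv_data))
print(default["big_value"])

# With dtype, the column can be kept as object (strings), so the exact
# digits survive. read_sql offers no equivalent knob.
as_object = pd.read_csv(io.StringIO(csv_data), dtype={"big_value": object})
print(as_object["big_value"])
```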
Output of pd.show_versions()
INSTALLED VERSIONS
commit: None
python: 3.6.0.final.0
python-bits: 64
OS: Darwin
OS-release: 16.6.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: en_US.UTF-8
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.20.2
pytest: 3.0.5
pip: 9.0.1
setuptools: 36.0.1
Cython: 0.25.2
numpy: 1.12.1
scipy: 0.19.1
xarray: None
IPython: 6.1.0
sphinx: 1.5.1
patsy: 0.4.1
dateutil: 2.6.0
pytz: 2016.10
blosc: None
bottleneck: 1.2.1
tables: 3.3.0
numexpr: 2.6.2
feather: None
matplotlib: 2.0.2
openpyxl: 2.4.1
xlrd: 1.0.0
xlwt: 1.2.0
xlsxwriter: 0.9.6
lxml: 3.8.0
bs4: 4.6.0
html5lib: 0.999999999
sqlalchemy: 1.1.5
pymysql: None
psycopg2: None
jinja2: 2.9.4
s3fs: None
pandas_gbq: None
pandas_datareader: None