Allow custom metadata to be attached to panel/df/series? #2485
storage of this data is pretty easy to implement in HDFStore. General thoughts on metadata:
specific to HDFStore:
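For instance, metadata can already ride along on the pytables node attributes today. A minimal sketch, assuming PyTables is installed; the key "df" and the payload are illustrative:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})

with pd.HDFStore("data.h5") as store:
    store.put("df", df)
    # pytables exposes node attributes; anything picklable can be stored
    store.get_storer("df").attrs.metadata = {"origin": "tower1"}

with pd.HDFStore("data.h5") as store:
    loaded = store["df"]
    meta = store.get_storer("df").attrs.metadata  # {'origin': 'tower1'}
```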
pytables is a very good fit in terms of features, but:
oh - I was not suggesting we use this as a backend for specific storage; I was just pointing out that HDFStore can support metadata if the pandas structures do. To answer your questions:
+1 for all metadata living under a single attribute. I'm against allowing non-serializable objects as metadata, at all. But I'm not sure; in any case, a hook + type-tag mechanism would allow users to plant ids of external objects.
What do you mean by not in-memory capable? HDF5 has an in-memory + stdout writer, and pytables support has been added.
oh. I wasn't aware of that and didn't find anything in the online docs.
Thanks for including me on this request y-p. IMO, it seems like we should not try to prohibit objects as metadata based on their serialization capacity. I only say this because how would one account for every possible object? For example, Chaco plots from the Enthought Tool Suite don't serialize easily, but who would know that unless they tried. I think it's best to let users put anything in as metadata, and if it can't serialize, then they'll know when an error is thrown. It is also possible to have the program serialize everything but the metadata, and then just alert the user that this aspect has been lost. Does anyone here know the pandas source code well enough to understand how to implement something like this? I really don't have a clue, but hope this isn't asking too much of the developers. Also, I think this addition will be a nice way to appease people who are always looking to subclass a DataFrame. Up-vote for the added attribute being called 'meta'.
Last time I checked, HDF5 has a limit on the size of the AttributeSet. I had to get around it by having my store object encapsulate a directory, with .h5 and pickled meta objects.
I think that adding metadata to the DataFrame object requires that it serialize and work with all backends (pickle, hdf5, etc). Which probably means restricting the type of metadata that can be added. There are corner cases to pickling custom classes that would become pandas problems.
Hi guys. I'm a bit curious about something. This fix is currently addressing adding custom attributes to a dataframe. The values of these attributes can be Python functions, no? If so, this might be a workaround for adding custom instance methods to a dataframe. I know some people way back when were interested in this possibility. I think the way this could work is that the dataframe should have a new method, call it... I dunno, add_custom_method(). This would take in a function, then add the function to the 'meta' attribute dictionary, with some sort of traceback to let the program know it is special. When the proposed new machinery assigns custom attributes to the new dataframe, it may also be neat to automatically promote such a function to an instance method. If it could do that, then we would have a way to effectively subclass a DataFrame without actually doing so. This is likely overkill for the first go-around, but maybe something to think about down the road.
@dalejung do you have a link to the AttributeSet limit?
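For context, the monkey patching referred to in the next comment looks roughly like this (a hypothetical sketch, not the original snippet):

```python
import pandas as pd

def to_spectral(self):
    # hypothetical custom behavior for illustration
    return self * 2.0

# attach an instance method to every DataFrame at runtime
pd.DataFrame.to_spectral = to_spectral

df = pd.DataFrame({"a": [1.0, 2.0]})
df.to_spectral()
```

Note this adds the method to every DataFrame in the process; per-instance attributes, by contrast, are lost on operations that return new frames, which is what the rest of this thread wrestles with.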
@jreback: Thanks for pointing this out man. I've heard of monkeypatching instance methods, but always thought it was more of a colloquialism for something more difficult. Thanks for showing me this. |
@jreback http://www.hdfgroup.org/HDF5/doc/UG/13_Attributes.html#SpecIssues maybe? It's been a while, and it could be that pytables hasn't implemented new HDF5 features. Personally, I had a dataset with ~40k items of metadata. Nothing complicated, just large. It was much easier to just pickle that stuff separately and use HDF for the actual data.
@dalejung thanks for the link... I am not sure of use-cases for metadata beyond simple structures anyhow... if you have regular data you can always store it as separate structures or pickle or whatever.
@hugadams np... good luck
@jreback sure, but that's kind of the state now. You can use DataFrames as attributes of custom classes. You can keep track of your metadata separately. My point is that there would be an expectation for the DataFrame metadata serialization to work. The HDF5 limit is worse because it's based on size and not type, which means it can work until it suddenly does not. There are always going to be use-cases we don't think of. Adding a metadata attribute that sometimes saves will be asking for trouble.
Looks like the for and against of the thorny serialization issue are clear. Here is another thorny issue: what are the semantics of propagating meta through operations?

```python
df1.meta.observation_date = "1/1/1981"
df1.meta.origin = "tower1"
df2.meta.observation_date = "1/1/1982"
df2.meta.origin = "tower2"
df3 = pd.concat([df1, df2])
# or merge, addition, ix, apply, etc.
```

Now, what's the "correct" meta for df3? I'd be interested to hear specific examples of the problems you hope this will solve for you.
@y-p I agree that propagation logic gets wonky. From experience, whether to propagate meta1/meta2/nothing is specific to the situation and doesn't follow any rule. Maybe the need for metadata would be fulfilled by easier composition tools? For example, I tend to delegate attribute calls to the child dataframe and also connect the repr/str. There are certain conveniences that pandas provides that you lose with a simple composition. Thinking about it, an API like numpy's array interface might be useful to allow composition classes to substitute for DataFrames.
Hi y-p. You bring up very good points in regard to merging. My thought would be that merged quantities that share keys should store results in a tuple, instead of overwriting; however, this is still an unfavorable situation. You know, once the monkey patching was made clear to me by jreback, I realized that I could most likely get all the functionality I was looking for in custom attributes. Perhaps what would be more helpful at this point, rather than custom attributes, would be a small tutorial on the main page about how to monkey patch and customize pandas data structures. In my personal situation, I no longer feel that custom metadata would really make or break my projects if monkey patching is adequate; however, you guys seem to have a better overview of pandas, so I think it really is your judgement call whether the new pros of metadata would outweigh the cons.
Thanks for all the ideas; here is my summary:
Dropping the milestone for now, but will leave this open if someone has more to add.
Hey y-p. Thanks for leaving this open. It turns out that monkey patching has not solved my problem as I originally thought it would. Yes, monkey patching does allow one to add custom instance methods and attributes to a dataframe; however, any function that results in a new dataframe will not retain the values of these custom attributes. From an email currently on the mailing list:
I've put together a custom dataframe for spectroscopy that I'm very excited about putting at the center of a new spectroscopy package; however, I realized that every operation that returns a new dataframe resets all of my custom attributes. The instance methods and slots for the attributes are retained, so this is better than nothing, but it is still going to hamper my program. The only workaround I can find is to add some sort of attribute-transfer function to every single dataframe method that I want to work with my custom dataframe. Thus, the whole point of making my object a custom dataframe is lost. With this in mind, I think monkey patching is not adequate unless there's a workaround that I'm not aware of. Will see if anyone replies on the mailing list.
@hugadams you are probably much better off creating a class to hold both the frame and the meta, and then forwarding methods as needed to handle manipulations... something like the sketch below. Depending on what exactly you need to do, this will work, and then you can customize serialization, object creation, etc. It only gets tricky when you do mutations; you can even handle those by defining __mul__ and friends. You prob have a limited set of operations that you really want to support; power users can just reach for the underlying frame. hth
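A minimal sketch of that composition approach; the class name MetaFrame and everything in it are illustrative, not pandas API:

```python
import pandas as pd

class MetaFrame(object):
    """Hold a DataFrame plus arbitrary metadata; forward the rest."""

    def __init__(self, df, **meta):
        self.df = df
        self.meta = meta

    def __getattr__(self, name):
        # anything not defined here is looked up on the underlying frame
        return getattr(self.df, name)

    def __mul__(self, other):
        # mutating ops return a new wrapper so the metadata travels along
        return MetaFrame(self.df * other, **self.meta)

    def __repr__(self):
        return "%r\nmeta: %r" % (self.df, self.meta)
```

So o = MetaFrame(pd.DataFrame({"a": [1, 2]}), origin="tower1") keeps o.meta across o * 5, while something like o.head() falls through to the frame and returns a plain DataFrame.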
Thanks for the input. I will certainly keep this in mind if the metadata idea of this thread never reaches fruition, as it seems to be the best way forward. Do you know offhand how I can implement direct slicing, e.g. o['col1'] instead of o.df['col1']? I wasn't sure how to transfer that functionality to my custom object without a direct call to the underlying dataframe. Thanks for pointing out the mul redefinition. This will help me going forward. This really does feel like a roundabout solution to the DataFrame's inability to be subclassed. Especially if my custom object were to evolve with pandas, this would require maintenance to keep it synced up with changes to the DataFrame API. What if we do this: using jreback's example, we create a generic class with the specific intention of its being subclassed for custom use? We can include the most common DataFrame methods and update all the operators accordingly. Then hopeless fools like me who come along with the intent to customize have a really strong starting point. I think that pandas' full potential has yet to be recognized by the research community, and I anticipate it will diffuse into many more scientific fields. As such, if we could present them with a generic class for customizing dataframes, then researchers may be more inclined to build packages around pandas, rather than coming up with their own ad-hoc data structures.
There are only a handful of methods you prob need to worry about, and you can always access df anyhow. It depends on what you want the user to be able to do with your object; for example, you could redefine __getitem__ to get direct slicing. Can you provide an example of what you are trying to do?
All I'm doing is creating a dataframe for spectral data. As such, it has a special index type that I've written called "SpecIndex" and several methods for transforming itself to various representations of the data. It also has special methods for extending how temporal data is managed. In any case, these operations are well-contained in my monkey-patched version, and would also be easily implemented in a new class formalism as you've shown. After this, it really should just quack. Besides these spectroscopic functions and attributes, it should behave like a dataframe. Therefore, I would prefer the most common operations on the dataframe to be seamless and promoted to instance methods. I want to encourage users to learn pandas and use this tool for exploratory spectroscopy. As such, I'm trying to intercept any inconsistencies ahead of time, like the one you pointed out about o.df = o.df * 5. Will I have to change the behavior of all the basic operators (e.g. *, /, +, -) or just *? Any caveat like this, I'd like to correct in advance. In the end, I want the class layer itself to be as invisible as possible. Do any more of these gotchas come to mind?
It's best to think of pandas objects like you do integers. If you had a hypothetical Person object, its height would just be a number. The number would have no idea it was a height or what unit it was in. It's just there for numerical operations. I think when the DataFrame is the primary data object this seems weird. But imagine that the Person object had a weight_history attribute. It wouldn't make sense to subclass a DataFrame to hold that attribute, especially if other pandas objects existed in Person data. Subclassing/metadata will always run into issues when doing exploratory analysis. Does SubDataFrame.tail() return a SubDataFrame? If it does, will it keep the same attributes? Do we want to make a copy of the dict for all ops like +, -, *? After a certain point it becomes obvious that you're not working with a Person or SpectralSeries. You're working on an int or a DataFrame. In the same way that...
Would the features offered by xarray be something that could be adopted here? They have data attributes. If pandas could get the same features, this would be great.
I think xarray is what you want. You may also try this metadataframe class I wrote a few years ago. It may help. You should be able to download that file, then just make a class that has df = MetaDataframe(). I thought that after 0.16 it was possible to simply subclass a dataframe, i.e. class MyDF(DataFrame). Or is this not the case?
Here's what I was talking about: http://pandas.pydata.org/pandas-docs/stable/internals.html#override-constructor-properties
So did you mean that everyone aiming to use metadata would be better off using xarray?
Just to be clear, xarray does support adding arbitrary metadata, but not automatic unit conversion. We could hook up a library like pint to handle this, but it's difficult to get all the edge cases working until numpy has better dtype support. |
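For concreteness, attaching metadata in xarray looks like this (the attribute names are just examples); note that the units attr is inert metadata, nothing converts it automatically:

```python
import xarray as xr

arr = xr.DataArray([1.0, 2.0, 3.0], dims="wavelength",
                   attrs={"units": "nm", "origin": "tower1"})
arr.attrs["origin"]  # metadata rides along with the array
```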
I think 'automatic unit conversion based on metadata attached to series' is...
This is quite simple in current versions of pandas. I am using a sub-class here for illustration purposes; see the sketch below. Unambiguous propagation would be quite easy, and users could add in their own propagation logic.
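A minimal sketch of that sub-class illustration, following the override-constructor-properties pattern from the pandas docs linked earlier; the attribute name origin is illustrative:

```python
import pandas as pd

class SubclassedDataFrame(pd.DataFrame):
    # attributes listed in _metadata are copied over by __finalize__
    _metadata = ["origin"]

    @property
    def _constructor(self):
        # pandas builds results through this, preserving the subclass
        return SubclassedDataFrame

df = SubclassedDataFrame({"a": [1, 2, 3]})
df.origin = "tower1"
sliced = df[["a"]]
print(type(sliced).__name__, sliced.origin)  # SubclassedDataFrame tower1
```

Operations with ambiguous semantics (concat, merge) still need an explicit policy; see the __finalize__ discussion below.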
Would take a patch for this; the modifications are pretty straightforward, it's the testing that is the key here.
@jreback is there a generic way to persist metadata amongst all transforms applied to a dataframe, including groupbys? Or would one have to go through and override a lot of methods to call __finalize__?
@postelrich for most/all things, __finalize__ takes care of it. For groupby...
I've been working on an implementation of this that handles the propagation problem by making the Metadata object itself subclass Series. Then patch Series to relay methods to Metadata. Roughly:
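The sketch below is a rough reconstruction of the shape of this idea; Metadata, MetaDatum, and propagate are this design's names, not pandas API:

```python
import pandas as pd

class MetaDatum(object):
    """One piece of metadata; subclasses decide how they survive ops."""

    def __init__(self, value):
        self.value = value

    def propagate(self, method, other=None):
        # default policy: carry the value through unchanged
        return self

class Metadata(pd.Series):
    """Bag of MetaDatum objects; being a Series itself lets patched
    Series methods relay operations here."""

    @property
    def _constructor(self):
        return Metadata

    def propagate(self, method, other=None):
        return Metadata({key: datum.propagate(method, other)
                         for key, datum in self.items()})
```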
So it is up to the individual MetaDatum classes to figure out how to propagate. I've generally got this working. The part that I have not gotten working is the desired...
Propagation of attributes (defined in _metadata) gives me some headaches... Based on the code of jreback, I've tried the following:
As jreback mentioned, choices have to be made about what to do with the appended attributes. EDIT: more headache is better, it pushes me to think harder :). Solved it by stealing the __finalize__ solution that GeoPandas provided; __finalize__ works pretty well indeed. However, I'm not experienced enough to perform the testing.
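The GeoPandas-style __finalize__ referred to here looks roughly like this (an adapted sketch; the concat handling mirrors what GeoPandas does for its own metadata):

```python
import pandas as pd

class MetaDataFrame(pd.DataFrame):
    _metadata = ["origin"]

    @property
    def _constructor(self):
        return MetaDataFrame

    def __finalize__(self, other, method=None, **kwargs):
        # 'other' is usually the source frame, but for concat it is a
        # _Concatenator holding all the input frames; pick a policy
        if method == "concat":
            other = other.objs[0]  # e.g. keep the first frame's metadata
        for name in self._metadata:
            object.__setattr__(self, name, getattr(other, name, None))
        return self
```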
Can't we just put metadata in the column name and change how columns are accessed? E.g. ["id"] would internally translate to {"name": "id"}. I don't know the internals of pandas, so sorry if this might be a little naive. To me it just seems that the column name is really consistent across operations.
My use case would be adding a description to "indicator variables" (just 0/1) which otherwise look like...
I think we have |
related:
#39 (column descriptions)
#686 (serialization concerns)
#447 (comment) (Feature request, implementation variant)
Ideas and issues: