Evalml: Update pipeline and components to return Woodwork data structures

Created on 4 Nov 2020 · 5 Comments · Source: alteryx/evalml

#1393 updated pipelines to accept Woodwork data structures, and #1288 addresses updating pipelines and components to accept Woodwork data structures as input. However, the outputs of methods like transform and predict are still pandas DataFrames, which is odd. This issue tracks updating those methods to return Woodwork data structures.

All 5 comments

Punting on this for now, given that Woodwork is finalizing plans for some big updates. If Woodwork becomes an extension of pandas, we may not want or need to do this.

@angela97lin and I checked in, and discussed a few implementation options:

  1. Have component graph evaluation pass pandas to each component. To communicate ww types to components, either add new fields to fit etc., or stick with the text featurizer pattern of using init parameters to indicate the relevant columns. Disadvantage: ugly from an API perspective; this is why we created woodwork in the first place.
  2. During component graph evaluation, pass woodwork to each component. Have each component return pandas. Disadvantage: components cannot change the woodwork type of the input features or of newly generated features, except by changing the pandas dtype. However, we don't currently have any components that rely on this.
  3. During component graph evaluation, pass woodwork to each component. Have each component return woodwork. Challenge: most components must convert to pandas internally to do transformations like adding, deleting, or modifying features. After those transformations, we have to make sure the original woodwork types make it into the newly returned woodwork datatable; otherwise user-overridden settings get lost, as they do today. (A rough sketch follows this list.)
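For illustration, here is a rough sketch of what option 3 could look like inside a single transformer, assuming the pre-1.0 Woodwork `DataTable` API (`DataTable`, `logical_types`, `to_dataframe`); the `ToyColumnDropper` below is an invented stand-in, not an actual evalml component:

```python
import woodwork as ww


class ToyColumnDropper:
    """Toy transformer sketch for option 3: accept Woodwork, return Woodwork."""

    def transform(self, X_ww):
        # Remember the logical types the caller (or the user) set on the input.
        original_types = dict(X_ww.logical_types)

        # Convert to pandas internally to do the actual work, as most
        # components must in order to call into external libraries.
        X = X_ww.to_dataframe()
        X_t = X.dropna(axis=1, how="all")  # the "transformation": drop all-null columns

        # Rebuild a DataTable, carrying the original logical types over for the
        # columns that survived; brand-new columns would fall back to inference here.
        kept_types = {col: typ for col, typ in original_types.items() if col in X_t.columns}
        return ww.DataTable(X_t, logical_types=kept_types)
```

The extra DataTable construction at every component boundary is where the runtime concern mentioned below comes from.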

Status: @angela97lin is currently pursuing option 3 in #1668

Plan: we'll continue that strategy, keeping an eye out for runtime regressions due to multiple ww datatable instantiations. We'll also consider whether there are any feature requests we should make to woodwork to make this easier, and keep an eye out for any compelling options we may have missed so far.

@chukarsten @gsheni

It seems like the third option is the best and cleanest one. Hopefully performance isn't impacted, but conceptually it seems sound. Thanks for bringing it to my attention... still trying to wrap my head around all of the things.

Hacking on this and thinking some more:

The end goal is that we need some way to keep track of the original logical types the user wants. This information could be held by the component graph, or passed along to each component, which would then be responsible for setting those types back after transforming the data. I'm currently pursuing option 3 and adding the information to the component graph, since that's the easiest to test (rather than updating every component)... but at the component level it wouldn't make much sense.

Say a user specifies a Woodwork DataTable and explicitly converts a categorical column to natural language, then passes that to a component. We need to convert to pandas to hand the data to external libraries, but we'd like to return a Woodwork object. If we simply call the Woodwork constructor on the result, it will only pick up the inferred type (categorical), which isn't what the user asked for. So we should keep track of the originally specified natural language type and convert back before handing the result to the user.
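A minimal sketch of that failure mode and the fix, again assuming the pre-1.0 Woodwork API; the column name and data are invented for the example, and the exact inferred type may vary with Woodwork's inference rules:

```python
import pandas as pd
import woodwork as ww
from woodwork.logical_types import NaturalLanguage

X = pd.DataFrame({"comments": ["great product", "great product", "needs work"]})

# The user explicitly overrides the inferred type with NaturalLanguage.
X_ww = ww.DataTable(X, logical_types={"comments": NaturalLanguage})

# Inside a component: convert to pandas to call into an external library.
X_out = X_ww.to_dataframe()

# Naive reconstruction re-runs type inference, so "comments" likely comes back
# as Categorical and the user's NaturalLanguage override is silently dropped.
naive = ww.DataTable(X_out)

# Carrying the original logical types along and passing them back in keeps it.
preserved = ww.DataTable(X_out, logical_types=dict(X_ww.logical_types))
print(naive.logical_types["comments"], preserved.logical_types["comments"])
```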

An interesting case is the standard scaler: it can take int columns and convert them to floats. If we then try to set the column back to its original type (int), we'll get yelled at for trying to convert floats to ints, since that isn't a safe cast. 😬
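That wrinkle boils down to a pandas casting rule rather than anything Woodwork-specific: once the values are scaled floats, forcing them back into an integer dtype (which is what restoring an integer logical type would do under the hood) is not a safe cast. A pandas-only illustration:

```python
import pandas as pd

ints = pd.Series([1, 2, 4], dtype="int64")
scaled = (ints - ints.mean()) / ints.std()  # standard-scaled values are now floats

# Restoring the original integer type afterwards is not a safe cast: the
# nullable "Int64" conversion rejects non-whole floats (the exception type
# varies across pandas versions), and plain "int64" would silently truncate.
try:
    scaled.astype("Int64")
except (TypeError, ValueError) as err:
    print(f"unsafe cast rejected: {err}")
```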

Update: had a quick discussion with @dsherry and @chukarsten. I'm currently implementing option 3, but with the component graph handling the tracking of the original user types and updating that information as it gets passed from component to component. This works okay and gets us to a place where AutoML / pipelines work, but after #1668 is merged we should tackle handling this at the component level and remove this code from the component graph.

My next to-dos: fix the index tests broken by updating the branch from main, clean up comments, and file issues for general cleanup items that aren't related to this PR. Once the code is cleaner, look for redundancies and profile to see where this huge time difference is coming from / what we can do about it.
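For the profiling step, one cheap way to see how much of the time goes to repeated DataTable construction is to profile a stand-in workload that rebuilds a DataTable once per "component"; the loop below is a made-up approximation, not evalml's actual component graph:

```python
import cProfile

import numpy as np
import pandas as pd
import woodwork as ww

X = pd.DataFrame(np.random.rand(10_000, 20), columns=[f"f{i}" for i in range(20)])


def simulate_component_boundaries(n_components=8):
    # Rebuild a DataTable at every "component boundary", mimicking what
    # returning Woodwork from each component introduces.
    for _ in range(n_components):
        dt = ww.DataTable(X)
        X_inner = dt.to_dataframe()  # components convert back to pandas internally
    return X_inner


# Sort by cumulative time to see how much is spent in DataTable construction
# (largely type inference) versus everything else.
cProfile.run("simulate_component_boundaries()", sort="cumulative")
```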
